Split server.py into a modular route architecture; fix the Token generation ordering issue

This commit is contained in:
lengbone 2026-03-29 21:44:08 +08:00
parent d420d9cec3
commit 9a5cc3e6a0
29 changed files with 1659 additions and 357 deletions

63
CLAUDE.md Normal file
View File

@ -0,0 +1,63 @@
# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Project Overview
RTC AIGC Demo — a real-time AI voice conversation demo built on the Volcano Engine RTC SDK, with a decoupled frontend/backend architecture. Frontend: React + TypeScript; backend: Python FastAPI.
## Common Commands
### Frontend (frontend/)
```bash
cd frontend
npm install       # install dependencies
npm run dev       # dev server (localhost:3000)
npm run build     # production build
npm run eslint    # run ESLint with autofix
npm run stylelint # lint LESS styles
npm run prettier  # format code
npm run test      # run tests
```
### Backend (backend/)
```bash
cd backend
cp .env.example .env  # copy the env config on first run
uv sync               # install dependencies (uses the uv package manager)
uv run uvicorn server:app --host 0.0.0.0 --port 3001 --reload  # start the dev server
```
## Architecture
### Frontend/backend communication
- The frontend connects to `http://localhost:3001` by default (configured as `AIGC_PROXY_HOST` in `frontend/src/config/index.ts`)
- Backend FastAPI entry point: `backend/server.py`
### Frontend core modules
- **State management**: Redux Toolkit with two slices: `store/slices/room.ts` (room state) and `store/slices/device.ts` (device state)
- **RTC wrapper**: `src/lib/RtcClient.ts` wraps the `@volcengine/rtc` SDK
- **API layer**: `src/app/api.ts` defines the `getScenes`, `StartVoiceChat`, and `StopVoiceChat` endpoints
- **Page structure**: `pages/MainPage/` contains two main areas: Room (in call) and Antechamber (pre-call)
- **Path alias**: `@/` → `src/` (configured via craco + tsconfig paths)
- **UI component library**: Arco Design
- **CSS**: LESS
### Backend core modules
- **Scene configuration**: `config/custom_scene.py` — builds the scene config from environment variables and auto-generates RoomId/UserId/Token
- **API proxy**: the `/proxy` endpoint forwards requests to the Volcano Engine RTC OpenAPI (with request signing)
- **LLM integration**: `services/local_llm_service.py` — Ark SDK integration with SSE streaming responses
- **Request signing**: `security/signer.py`
- **Token generation**: `security/rtc_token.py`
### LLM modes
Switched via the `CUSTOM_LLM_MODE` environment variable:
- `ArkV3`: call the Volcano Ark LLM directly
- `CustomLLM`: custom LLM callback (the `/api/chat_callback` endpoint)
### Key environment variables (backend/.env)
- `CUSTOM_ACCESS_KEY_ID` / `CUSTOM_SECRET_KEY`: Volcano Engine credentials
- `CUSTOM_RTC_APP_ID` / `CUSTOM_RTC_APP_KEY`: RTC application settings
- `CUSTOM_LLM_MODE` + the matching LLM settings
- `CUSTOM_ASR_APP_ID` / `CUSTOM_TTS_APP_ID`: speech recognition / synthesis settings
- `CUSTOM_AVATAR_*`: digital avatar settings (optional)
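The backend validates required variables at startup and fails fast when any are missing. A minimal sketch of that pattern (hypothetical helper names, not the repository's actual `utils/env.py`):

```python
def collect_required(names: list, env: dict) -> tuple:
    """Return (values, missing) for the required variable names."""
    values, missing = {}, []
    for name in names:
        value = (env.get(name) or "").strip()
        if value:
            values[name] = value
        else:
            missing.append(name)
    return values, missing

# Demo with an explicit mapping instead of os.environ:
values, missing = collect_required(
    ["CUSTOM_ACCESS_KEY_ID", "CUSTOM_SECRET_KEY"],
    {"CUSTOM_ACCESS_KEY_ID": "ak-demo"},
)
# A non-empty `missing` list makes the real backend raise at startup.
```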

View File

@ -1,7 +1,7 @@
# Interactive AIGC Scenario AIGC Demo
This Demo is a simplified version; if you need the 1.5.x UI, switch to the 1.5.1 branch.
While getting it running, you don't need to touch the code; just fill in the scene info in `backend/scenes/*.json` as needed.
While getting it running, you don't need to touch the code. The recommended way to configure the `Custom` scene is now `backend/.env` + `backend/config/custom_scene.py`.
## Introduction
- In AIGC conversation scenarios, the Volcano Engine AIGC-RTC Server cloud service combines RTC audio/video stream processing, ASR speech recognition, LLM API calls, and TTS speech synthesis to provide an end-to-end AIGC capability chain based on streaming voice.
@ -22,28 +22,66 @@
Enable the ASR, TTS, LLM, and RTC services; see [Enabling Services](https://www.volcengine.com/docs/6348/1315561?s=g) for the authorization and activation steps.
### 3. Scene configuration
`backend/scenes/*.json`
The main configuration entry point that actually takes effect now is `backend/.env` + `backend/config/custom_scene.py`.
You can define your own scenes, filling in the parameters required in `SceneConfig`, `AccountConfig`, `RTCConfig`, and `VoiceChat` from the template as needed.
The Demo uses the `Custom` scene as an example; you can add scenes yourself.
The Demo uses the `Custom` scene as an example; you can also add other JSON scenes yourself.
For the `Custom` scene, run the following first:
```shell
cp backend/.env.example backend/.env
```
Notes:
- `SceneConfig`: scene info such as the name and avatar.
- `AccountConfig`: account info for the scene; get the AK/SK from https://console.volcengine.com/iam/keymanage/.
- `AccountConfig`: the `Custom` scene reads the AK/SK from `backend/.env` by default; other scenes still configure them in JSON.
- `RTCConfig`: RTC settings for the scene.
  - AppId and AppKey are available from https://console.volcengine.com/rtc/aigc/listRTC.
  - For the `Custom` scene, AppId, AppKey, RoomId, UserId, and Token can all be injected via `backend/.env`.
  - RoomId and UserId can be customized, or left empty for the server to generate.
- `VoiceChat`: AIGC settings for the scene.
  - For the `Custom` scene, the TaskId, agent user info, welcome message, System Message, and LLM mode parameters are all injected via `backend/.env`.
  - Both `ArkV3` and `CustomLLM` modes are supported; for `CustomLLM` the currently recommended wiring is "the local callback built into the current backend + a public address exposed via ngrok".
  - See the parameter descriptions at https://www.volcengine.com/docs/6348/1558163 and fill in every field.
  - You can obtain the parameters quickly via [Quick Start Demo](https://console.volcengine.com/rtc/aigc/run?s=g); once it runs, click the `接入 API` button in the top-right corner and paste the copied code into the JSON config file.
- Complex structures such as `ASRConfig`, `TTSConfig`, and `AvatarConfig` keep their default values in `backend/config/custom_scene.py` and read the key runtime parameters from `backend/.env`.
- The first release does not enable the main RAG pipeline by default; `backend/services/rag_service.py` is kept only as a future extension point.
- You can obtain the parameters quickly via [Quick Start Demo](https://console.volcengine.com/rtc/aigc/run?s=g), then fill them into `backend/.env` and the default structures in `backend/config/custom_scene.py`.
### Third-party CustomLLM integration
To hook your local service up as the `CustomLLM`, the recommended approach is to let the current `backend` serve the callback endpoint itself:
```dotenv
CUSTOM_LLM_MODE=CustomLLM
CUSTOM_LLM_URL=http://127.0.0.1:3001/api/chat_callback
CUSTOM_LLM_MODEL_NAME=my-model
CUSTOM_LLM_API_KEY=your-callback-token
ARK_API_KEY=your-ark-api-key
ARK_ENDPOINT_ID=your-ark-endpoint-id
```
Once the `backend` is running locally, expose port `3001` with `ngrok`, then change `CUSTOM_LLM_URL` to the public address:
```dotenv
CUSTOM_LLM_URL=https://your-ngrok-domain.ngrok-free.app/api/chat_callback
```
Notes:
- `CUSTOM_LLM_URL` is the address written into `StartVoiceChat.LLMConfig.Url`
- You can start the service with the local address first, then switch to the public `https` address once `ngrok` is up
- The fixed callback route built into the current backend is `POST /api/chat_callback`
- `RTC_OPENAPI_VERSION` defaults to `2025-06-01`
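The wiring above boils down to: whatever `CUSTOM_LLM_URL` holds ends up as `LLMConfig.Url` in the `StartVoiceChat` body. A simplified sketch of that mapping (illustrative only; the real builder lives in `backend/config/custom_scene.py` and handles many more fields):

```python
def build_llm_config(mode: str, url: str = "", endpoint_id: str = "") -> dict:
    """Map env-style settings onto a minimal StartVoiceChat LLMConfig payload."""
    if mode == "ArkV3":
        return {"Mode": "ArkV3", "EndPointId": endpoint_id}
    # CustomLLM: Volcano will POST chat requests to this callback URL.
    return {"Mode": "CustomLLM", "Url": url}

config = build_llm_config(
    "CustomLLM",
    url="https://your-ngrok-domain.ngrok-free.app/api/chat_callback",
)
```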
## Quick start
Note that both the server and the web client must be started, as follows:
### Backend service (Python FastAPI)
```shell
cd backend
cp .env.example .env
uv sync
uv run uvicorn main:app --host 0.0.0.0 --port 3001 --reload
uv run uvicorn server:app --host 0.0.0.0 --port 3001 --reload
```
### Frontend page
@ -56,16 +94,16 @@ npm run dev
### FAQ
| Problem | Solution |
| :-- | :-- |
| How do I use a third-party model or a Coze Bot? | Model-related configuration lives in the json files under `src/config/scenes/`; after filling in the parameters for the official model / Coze / third-party model, click "修改 AI 人设" on the page to switch. |
| How do I use a third-party model or a Coze Bot? | The main configuration entry point is now `backend/.env` + `backend/config/custom_scene.py`. To plug in your own model, the recommended path is to use the backend's built-in `/api/chat_callback` as the `CustomLLM` callback endpoint, expose it publicly via `ngrok`, and put that address into `CUSTOM_LLM_URL`. |
| **After starting the agent, there is no response, or it stays on "AI 准备中, 请稍候"; with the digital avatar enabled, it stays on "数字人准备中,请稍候"** | <li>The required permissions may not have been granted in the console; re-check against [the process](https://www.volcengine.com/docs/6348/1315561?s=g). This is the most likely cause, so verify carefully that the permissions are enabled.</li><li>A parameter may be passed incorrectly, e.g. wrong case or type; double-check for such issues.</li><li>The relevant resources may not be enabled, or the quota may be exhausted / the account in arrears; please verify.</li><li>**Check that the model ID / avatar AppId / Token and similar values are correct and usable.**</li><li>The digital avatar service has a concurrency limit; hitting it also shows as being stuck on "数字人准备中".</li> |
| **The browser throws `Uncaught (in promise) r: token_error`** | Check that the RTC Token in the project is valid: verify that the UserId and RoomId used to generate the Token, and the Token itself, match what is filled into the project; the Token may also have expired, so try regenerating it. |
| **[StartVoiceChat]Failed(Reason: The task has been started. Please do not call the startup task interface repeatedly.)** | If RoomId and UserId are fixed values, calling startAgent repeatedly fails; simply call stopAgent first and then startAgent again. |
| Why does enabling the microphone/camera fail with `TypeError: Cannot read properties of undefined (reading 'getUserMedia')`? | Check whether the page is a [secure context](https://developer.mozilla.org/zh-CN/docs/Web/Security/Secure_Contexts) (in short, whether it is served from `localhost` or over https). Browsers [restrict](https://developer.mozilla.org/zh-CN/docs/Web/Security/Secure_Contexts/features_restricted_to_secure_contexts) `getUserMedia` to secure contexts. |
| My microphone and camera are fine, so why doesn't the device work? | Device permission may not have been granted; see [Troubleshooting device permission failures on Web](https://www.volcengine.com/docs/6348/1356355?s=g). |
| An API call returns "Invalid 'Authorization' header, Pls check your authorization header" | The AK/SK in `backend/scenes/*.json` is incorrect. |
| An API call returns "Invalid 'Authorization' header, Pls check your authorization header" | For the `Custom` scene, check `CUSTOM_ACCESS_KEY_ID` / `CUSTOM_SECRET_KEY` in `backend/.env`; for other scenes, check the AK/SK in the corresponding `backend/scenes/*.json`. |
| What is RTC? | **R**eal **T**ime **C**ommunication; see the [official docs](https://www.volcengine.com/docs/6348/66812?s=g) for the concept. |
| What are primary accounts and sub-accounts? | See the [official concepts](https://www.volcengine.com/docs/6257/64963?hyperlink_open_type=lark.open_in_browser&s=g). |
| I have my own server; how do I make the frontend call it? | Modify the `AIGC_PROXY_HOST` request domain in `src/config/index.ts` and adjust the `APIS_CONFIG` endpoint parameters in `src/app/api.ts`. |
| I have my own server; how do I make the frontend call it? | Modify the `AIGC_PROXY_HOST` request domain in `frontend/src/config/index.ts`; if you also need to adjust the API routes, see `BasicAPIs` / `AigcAPIs` in `frontend/src/app/api.ts`. |
If you hit problems beyond the above, feel free to contact us.
@ -97,4 +135,4 @@ npm run dev
- Updated the UI and the parameter configuration approach
- Updated the Readme
- Added parameter validation to the Node service
- Added Token generation to the Node service
- Added Token generation to the Node service

84
backend/.env.example Normal file
View File

@ -0,0 +1,84 @@
# RTC OpenAPI version
RTC_OPENAPI_VERSION=2025-06-01
# Custom scene: basic configuration
CUSTOM_ACCESS_KEY_ID=your-access-key-id
CUSTOM_SECRET_KEY=your-secret-key
CUSTOM_RTC_APP_ID=your-rtc-app-id
CUSTOM_RTC_APP_KEY=
# Leave empty to have the server auto-generate RoomId / UserId / Token
CUSTOM_RTC_ROOM_ID=
CUSTOM_RTC_USER_ID=
CUSTOM_RTC_TOKEN=
# Custom scene: business configuration
CUSTOM_TASK_ID=your-task-id
CUSTOM_AGENT_USER_ID=your-agent-user-id
CUSTOM_AGENT_TARGET_USER_ID=
CUSTOM_AGENT_WELCOME_MESSAGE=你好,我是小乖,有什么需要帮忙的吗?
CUSTOM_SCENE_NAME=自定义助手
CUSTOM_SCENE_ICON=https://lf3-rtc-demo.volccdn.com/obj/rtc-aigc-assets/DoubaoAvatar.png
CUSTOM_INTERRUPT_MODE=0
# Shared LLM configuration
CUSTOM_LLM_MODE=ArkV3
CUSTOM_LLM_SYSTEM_MESSAGE=你是小乖,性格幽默又善解人意。你在表达时需简明扼要,有自己的观点。
CUSTOM_LLM_VISION_ENABLE=false
CUSTOM_LLM_THINKING_TYPE=disabled
# ArkV3 mode
CUSTOM_LLM_ENDPOINT_ID=your-ark-endpoint-id
# CustomLLM mode
# For local debugging you can keep the default local callback address.
# Once ngrok is running, change CUSTOM_LLM_URL to the public https address, e.g.:
# https://your-ngrok-domain.ngrok-free.app/api/chat_callback
CUSTOM_LLM_URL=http://127.0.0.1:3001/api/chat_callback
# Bearer token Volcano sends when calling this backend's /api/chat_callback (may be left empty)
CUSTOM_LLM_API_KEY=
CUSTOM_LLM_MODEL_NAME=
CUSTOM_LLM_HISTORY_LENGTH=
CUSTOM_LLM_PREFILL=
CUSTOM_LLM_CUSTOM=
CUSTOM_LLM_EXTRA_HEADER_JSON=
CUSTOM_LLM_ENABLE_PARALLEL_TOOL_CALLS=
CUSTOM_LLM_TEMPERATURE=
CUSTOM_LLM_TOP_P=
CUSTOM_LLM_MAX_TOKENS=
# Built-in local Ark callback configuration for the current backend
# /api/chat_callback calls Ark directly with these settings
ARK_API_KEY=
ARK_ENDPOINT_ID=
ARK_BASE_URL=https://ark.cn-beijing.volces.com/api/v3
ARK_TIMEOUT_SECONDS=1800
LOCAL_LLM_SYSTEM_PROMPT="你是一个测试助手。如果别人问你是谁,你就说你是哈哈哈。"
LOCAL_LLM_TEMPERATURE=0.3
# Optional RAG placeholder configuration
# The main RAG pipeline is disabled by default in the first release; fill these in when you integrate it later
RAG_STATIC_CONTEXT=
RAG_CONTEXT_FILE=
# ASR / TTS
CUSTOM_ASR_APP_ID=your-asr-app-id
CUSTOM_TTS_APP_ID=your-tts-app-id
CUSTOM_ASR_PROVIDER=volcano
CUSTOM_ASR_MODE=smallmodel
CUSTOM_ASR_CLUSTER=volcengine_streaming_common
CUSTOM_TTS_PROVIDER=volcano
CUSTOM_TTS_CLUSTER=volcano_tts
CUSTOM_TTS_VOICE_TYPE=BV001_streaming
CUSTOM_TTS_SPEED_RATIO=1
CUSTOM_TTS_PITCH_RATIO=1
CUSTOM_TTS_VOLUME_RATIO=1
# Digital avatar configuration
CUSTOM_AVATAR_ENABLED=false
CUSTOM_AVATAR_TYPE=3min
CUSTOM_AVATAR_ROLE=250623-zhibo-linyunzhi
CUSTOM_AVATAR_BACKGROUND_URL=
CUSTOM_AVATAR_VIDEO_BITRATE=2000
CUSTOM_AVATAR_APP_ID=
CUSTOM_AVATAR_TOKEN=

View File

@ -1,6 +1,10 @@
# AIGC Backend (Python FastAPI)
# AIGC Backend (Python FastAPI)
A Python rewrite of the original Node.js + Koa service, built on the FastAPI framework.
This is the Demo's backend service; it does three things:
- Builds the `Custom` scene configuration from `backend/.env` and `backend/config/custom_scene.py`
- Proxies calls to the Volcano RTC OpenAPI
- Serves the local `CustomLLM` callback endpoint `/api/chat_callback` in the same FastAPI process
## Requirements
@ -12,38 +16,175 @@
uv sync
```
## Scene configuration
Edit `scenes/*.json` and fill in the following fields:
| Field | Description |
|------|------|
| `AccountConfig.accessKeyId` | Volcano Engine AK, from https://console.volcengine.com/iam/keymanage/ |
| `AccountConfig.secretKey` | Volcano Engine SK |
| `RTCConfig.AppId` | RTC application ID |
| `RTCConfig.AppKey` | RTC application key (used to auto-generate the Token) |
| `VoiceChat.*` | AIGC-related settings; see https://www.volcengine.com/docs/6348/1558163 |
## Starting the service
## How to start
```shell
uvicorn main:app --host 0.0.0.0 --port 3001 --reload
uv run uvicorn server:app --host 0.0.0.0 --port 3001 --reload
```
The service listens on `http://localhost:3001` after startup.
The service listens on `http://localhost:3001` by default.
## Configuration
The `Custom` scene always uses:
- `backend/.env`
- `backend/config/custom_scene.py`
It no longer depends on `backend/scenes/Custom.json`.
Copy the example config first:
```shell
cp .env.example .env
```
### Required basic configuration
| Variable | Description |
| --- | --- |
| `CUSTOM_ACCESS_KEY_ID` | Volcano Engine AK |
| `CUSTOM_SECRET_KEY` | Volcano Engine SK |
| `CUSTOM_RTC_APP_ID` | RTC application ID, used as both `RTCConfig.AppId` and `VoiceChat.AppId` |
| `CUSTOM_TASK_ID` | AIGC task ID |
| `CUSTOM_AGENT_USER_ID` | Agent user ID |
| `CUSTOM_AGENT_WELCOME_MESSAGE` | Agent welcome message |
| `CUSTOM_LLM_SYSTEM_MESSAGE` | System Message |
| `CUSTOM_ASR_APP_ID` | ASR application ID |
| `CUSTOM_TTS_APP_ID` | TTS application ID |
### RTC-related configuration
| Variable | Description |
| --- | --- |
| `CUSTOM_RTC_APP_KEY` | Used to auto-generate a Token when `CUSTOM_RTC_TOKEN` is not provided |
| `CUSTOM_RTC_ROOM_ID` | Room ID; auto-generated by the server when empty |
| `CUSTOM_RTC_USER_ID` | User ID; auto-generated by the server when empty |
| `CUSTOM_RTC_TOKEN` | RTC Token; auto-generated by the server when empty |
| `RTC_OPENAPI_VERSION` | OpenAPI version used by `StartVoiceChat/StopVoiceChat`; defaults to `2025-06-01` |
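The RTC fields above interact in a simple way: empty RoomId/UserId get generated, and a Token is only generated when one was not supplied, which in turn requires the AppKey. A sketch of that decision logic (illustrative, not the repository's exact code):

```python
import uuid

def resolve_rtc_identity(room_id: str, user_id: str, token: str, app_key: str) -> dict:
    """Fill empty RoomId/UserId and decide whether a Token must be generated."""
    if not token and not app_key:
        raise ValueError("CUSTOM_RTC_APP_KEY is required when CUSTOM_RTC_TOKEN is empty")
    return {
        "RoomId": room_id or str(uuid.uuid4()),
        "UserId": user_id or str(uuid.uuid4()),
        # With an AppKey present, the server can sign a fresh Token for this pair.
        "needs_generated_token": not token,
    }

identity = resolve_rtc_identity("", "", "", "demo-app-key")
```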
### LLM mode 1: ArkV3
```dotenv
CUSTOM_LLM_MODE=ArkV3
CUSTOM_LLM_ENDPOINT_ID=your-ark-endpoint-id
```
### LLM mode 2: CustomLLM local callback
This mode does not spin up an extra proxy service; the current `backend` serves the callback endpoint itself:
```dotenv
CUSTOM_LLM_MODE=CustomLLM
CUSTOM_LLM_URL=http://127.0.0.1:3001/api/chat_callback
CUSTOM_LLM_API_KEY=your-callback-token
CUSTOM_LLM_MODEL_NAME=my-model
```
Recommended debugging flow:
1. Start the current `backend`
2. Expose port `3001` with `ngrok`
3. Change `CUSTOM_LLM_URL` to the public address, e.g.:
```dotenv
CUSTOM_LLM_URL=https://your-ngrok-domain.ngrok-free.app/api/chat_callback
```
`CUSTOM_LLM_API_KEY` is the Bearer token Volcano attaches when calling this local callback endpoint; leave it empty if you don't need that layer of authentication.
### Built-in Ark configuration for the current backend
`/api/chat_callback` calls the Ark SDK internally, so you also need:
```dotenv
ARK_API_KEY=your-ark-api-key
ARK_ENDPOINT_ID=your-ark-endpoint-id
ARK_BASE_URL=https://ark.cn-beijing.volces.com/api/v3
ARK_TIMEOUT_SECONDS=1800
LOCAL_LLM_SYSTEM_PROMPT=
LOCAL_LLM_TEMPERATURE=0.3
```
If `LOCAL_LLM_SYSTEM_PROMPT` is left empty, it falls back to `CUSTOM_LLM_SYSTEM_MESSAGE`.
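That fallback amounts to a one-line rule; a minimal sketch (hypothetical helper name):

```python
def resolve_system_prompt(local_prompt: str, custom_system_message: str) -> str:
    """LOCAL_LLM_SYSTEM_PROMPT wins when set; otherwise fall back to CUSTOM_LLM_SYSTEM_MESSAGE."""
    return local_prompt.strip() or custom_system_message

prompt = resolve_system_prompt("", "你是小乖,性格幽默又善解人意。")
```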
### Optional RAG placeholder configuration
The repository ships a minimal RAG placeholder implementation, which the first release does not wire into the main pipeline by default. It supports two input sources:
```dotenv
RAG_STATIC_CONTEXT=
RAG_CONTEXT_FILE=
```
- `RAG_STATIC_CONTEXT`: a hard-coded block of knowledge text
- `RAG_CONTEXT_FILE`: read a local file in full as the knowledge context
To plug in real vector retrieval later, replace `retrieve` in `services/rag_service.py` and wire the main pipeline in `server.py` back up.
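The two input sources combine into a very small retrieval stub. A sketch of the assumed placeholder behavior (the real code is in `services/rag_service.py`; the helper name here is illustrative):

```python
from pathlib import Path

def retrieve_context(static_context: str, context_file: str) -> str:
    """Placeholder retrieval: prefer the inline text, else read the file, else nothing."""
    if static_context:
        return static_context
    if context_file and Path(context_file).is_file():
        return Path(context_file).read_text(encoding="utf-8")
    return ""
```

A real vector-retrieval backend would replace this function while keeping the same signature.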
### Optional CustomLLM parameters
| Variable | Description |
| --- | --- |
| `CUSTOM_LLM_HISTORY_LENGTH` | Number of history turns |
| `CUSTOM_LLM_PREFILL` | Whether to enable Prefill |
| `CUSTOM_LLM_CUSTOM` | Passed through as the `custom` field of the request body |
| `CUSTOM_LLM_EXTRA_HEADER_JSON` | Extra request headers (a JSON object string) |
| `CUSTOM_LLM_ENABLE_PARALLEL_TOOL_CALLS` | Whether to enable parallel tool calls |
| `CUSTOM_LLM_TEMPERATURE` | Passed-through temperature |
| `CUSTOM_LLM_TOP_P` | Passed-through `top_p` |
| `CUSTOM_LLM_MAX_TOKENS` | Passed-through `max_tokens` |
For the remaining optional ASR, TTS, and avatar fields, see `backend/.env.example` directly.
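These optional variables are only copied into the request when they were actually set. A sketch of that filtering (the assumed behavior of the `set_if_present` helper used in `config/custom_scene.py`):

```python
def set_if_present(target: dict, key: str, value) -> None:
    """Copy a value into the payload only when it was actually configured."""
    if value is not None and value != "":
        target[key] = value

llm_config: dict = {}
set_if_present(llm_config, "Temperature", 0.7)
set_if_present(llm_config, "TopP", None)     # unset -> skipped
set_if_present(llm_config, "ModelName", "")  # empty -> skipped
```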
## API reference
### POST /getScenes
### `POST /getScenes`
Returns all scenes, auto-generating RoomId/UserId/Token (when not configured in the JSON).
Returns the scene list and automatically fills in:
### POST /proxy?Action={Action}&Version={Version}
- `RoomId`
- `UserId`
- `Token`
Proxies the request to the Volcano Engine RTC OpenAPI.
### `POST /proxy?Action={Action}&Version={Version}`
Supported Actions:
- `StartVoiceChat` — start a voice conversation
- `StopVoiceChat` — stop a voice conversation
Proxies the request to the Volcano RTC OpenAPI.
The request body must include a `SceneID` field matching a JSON filename (without the extension) under `scenes/`.
Supported:
- `StartVoiceChat`
- `StopVoiceChat`
The request body must include `SceneID`.
Version precedence:
1. the `Version` query parameter
2. the `RTC_OPENAPI_VERSION` environment variable
3. the default `2025-06-01`
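That precedence is a single short-circuit expression; a sketch (hypothetical function name, mirroring `get_rtc_openapi_version` plus the query-parameter override):

```python
def resolve_openapi_version(query_version, env: dict) -> str:
    """Query parameter first, then RTC_OPENAPI_VERSION, then the built-in default."""
    return query_version or env.get("RTC_OPENAPI_VERSION") or "2025-06-01"
```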
### `POST /api/chat_callback`
This is the `CustomLLM` callback endpoint built into the current backend, and the target address you configure as Volcano's `LLMConfig.Url`.
Behavior:
- Accepts the `messages` Volcano sends in
- Optionally validates `Authorization: Bearer <CUSTOM_LLM_API_KEY>`
- Internally calls the Ark SDK via `services/local_llm_service.py`
- Returns `text/event-stream` as Volcano requires
- Always appends `data: [DONE]` at the end of the stream
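The SSE framing the endpoint produces can be sketched as a tiny generator: each chunk becomes one `data:` line, and the stream always terminates with `[DONE]` (illustrative only; the real implementation lives in `routes/chat_callback.py`):

```python
import json

def sse_events(chunks):
    """Frame each chunk as an SSE data line and always terminate with [DONE]."""
    for chunk in chunks:
        yield f"data: {json.dumps(chunk, ensure_ascii=False)}\n\n"
    yield "data: [DONE]\n\n"

events = list(sse_events([{"choices": [{"delta": {"content": "你好"}}]}]))
```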
### `POST /debug/chat`
Debug the LLM text stream locally, without going through RTC.
### `GET /debug/rag?query=...`
Debug the RAG retrieval result locally.
## Notes
- The frontend no longer decides the model provider directly; model switching is controlled entirely by backend environment variables.
- If key `CUSTOM_*` or `ARK_*` settings are missing, the service fails at startup instead of running in a half-usable state.

View File

@ -0,0 +1 @@
"""Backend config package."""

View File

@ -0,0 +1,225 @@
"""
Copyright 2025 Beijing Volcano Engine Technology Co., Ltd. All Rights Reserved.
SPDX-license-identifier: BSD-3-Clause
"""
from typing import Any
from utils.env import (
env_bool,
env_int,
env_json_object,
env_list,
env_number,
env_optional_bool,
env_optional_int,
env_optional_number,
env_str,
require_env,
set_if_present,
)
CUSTOM_SCENE_ID = "Custom"
DEFAULT_SCENE_NAME = "自定义助手"
DEFAULT_SCENE_ICON = (
"https://lf3-rtc-demo.volccdn.com/obj/rtc-aigc-assets/DoubaoAvatar.png"
)
DEFAULT_LLM_MODE = "ArkV3"
DEFAULT_LLM_THINKING_TYPE = "disabled"
DEFAULT_RTC_OPENAPI_VERSION = "2025-06-01"
DEFAULT_CUSTOM_LLM_CALLBACK_URL = "http://127.0.0.1:3001/api/chat_callback"
SUPPORTED_LLM_MODES = {"ArkV3", "CustomLLM"}
DEFAULT_ASR_PROVIDER = "volcano"
DEFAULT_ASR_MODE = "smallmodel"
DEFAULT_ASR_CLUSTER = "volcengine_streaming_common"
DEFAULT_TTS_PROVIDER = "volcano"
DEFAULT_TTS_CLUSTER = "volcano_tts"
DEFAULT_TTS_VOICE_TYPE = "BV001_streaming"
DEFAULT_AVATAR_TYPE = "3min"
DEFAULT_AVATAR_ROLE = "250623-zhibo-linyunzhi"
DEFAULT_AVATAR_VIDEO_BITRATE = 2000
def get_rtc_openapi_version() -> str:
return env_str("RTC_OPENAPI_VERSION", DEFAULT_RTC_OPENAPI_VERSION)
def build_llm_settings_from_env(missing: list[str]) -> dict[str, Any]:
llm_mode = env_str("CUSTOM_LLM_MODE", DEFAULT_LLM_MODE)
if llm_mode not in SUPPORTED_LLM_MODES:
modes = ", ".join(sorted(SUPPORTED_LLM_MODES))
raise ValueError(f"CUSTOM_LLM_MODE 仅支持以下取值: {modes}")
settings = {
"mode": llm_mode,
"system_message": require_env("CUSTOM_LLM_SYSTEM_MESSAGE", missing),
"vision_enable": env_bool("CUSTOM_LLM_VISION_ENABLE", False),
"thinking_type": env_str(
"CUSTOM_LLM_THINKING_TYPE", DEFAULT_LLM_THINKING_TYPE
),
"api_key": env_str("CUSTOM_LLM_API_KEY"),
"model_name": env_str("CUSTOM_LLM_MODEL_NAME"),
"history_length": env_optional_int("CUSTOM_LLM_HISTORY_LENGTH"),
"prefill": env_optional_bool("CUSTOM_LLM_PREFILL"),
"custom": env_str("CUSTOM_LLM_CUSTOM"),
"extra_header": env_json_object("CUSTOM_LLM_EXTRA_HEADER_JSON"),
"enable_parallel_tool_calls": env_optional_bool(
"CUSTOM_LLM_ENABLE_PARALLEL_TOOL_CALLS"
),
"temperature": env_optional_number("CUSTOM_LLM_TEMPERATURE"),
"top_p": env_optional_number("CUSTOM_LLM_TOP_P"),
"max_tokens": env_optional_int("CUSTOM_LLM_MAX_TOKENS"),
}
if llm_mode == "ArkV3":
settings["endpoint_id"] = require_env("CUSTOM_LLM_ENDPOINT_ID", missing)
return settings
settings["url"] = env_str("CUSTOM_LLM_URL", DEFAULT_CUSTOM_LLM_CALLBACK_URL)
settings["endpoint_id"] = env_str("CUSTOM_LLM_ENDPOINT_ID")
require_env("ARK_API_KEY", missing)
require_env("ARK_ENDPOINT_ID", missing)
return settings
def build_llm_config(llm_settings: dict[str, Any]) -> dict[str, Any]:
llm_config = {
"Mode": llm_settings["mode"],
"SystemMessages": [llm_settings["system_message"]],
"VisionConfig": {
"Enable": llm_settings["vision_enable"],
},
}
if llm_settings["mode"] == "ArkV3":
llm_config["EndPointId"] = llm_settings["endpoint_id"]
llm_config["ThinkingType"] = llm_settings["thinking_type"]
return llm_config
llm_config["Url"] = llm_settings["url"]
if llm_settings["api_key"]:
llm_config["APIKey"] = llm_settings["api_key"]
optional_fields = {
"ModelName": llm_settings["model_name"],
"HistoryLength": llm_settings["history_length"],
"Prefill": llm_settings["prefill"],
"Custom": llm_settings["custom"],
"ExtraHeader": llm_settings["extra_header"],
"EnableParallelToolCalls": llm_settings["enable_parallel_tool_calls"],
"Temperature": llm_settings["temperature"],
"TopP": llm_settings["top_p"],
"MaxTokens": llm_settings["max_tokens"],
}
for key, value in optional_fields.items():
set_if_present(llm_config, key, value)
return llm_config
def build_custom_scene_from_env() -> dict[str, Any]:
missing: list[str] = []
access_key_id = require_env("CUSTOM_ACCESS_KEY_ID", missing)
secret_key = require_env("CUSTOM_SECRET_KEY", missing)
rtc_app_id = require_env("CUSTOM_RTC_APP_ID", missing)
task_id = require_env("CUSTOM_TASK_ID", missing)
agent_user_id = require_env("CUSTOM_AGENT_USER_ID", missing)
welcome_message = require_env("CUSTOM_AGENT_WELCOME_MESSAGE", missing)
asr_app_id = require_env("CUSTOM_ASR_APP_ID", missing)
tts_app_id = require_env("CUSTOM_TTS_APP_ID", missing)
llm_settings = build_llm_settings_from_env(missing)
rtc_app_key = env_str("CUSTOM_RTC_APP_KEY")
rtc_room_id = env_str("CUSTOM_RTC_ROOM_ID")
rtc_user_id = env_str("CUSTOM_RTC_USER_ID")
rtc_token = env_str("CUSTOM_RTC_TOKEN")
if not rtc_token and not rtc_app_key:
missing.append("CUSTOM_RTC_APP_KEY")
if missing:
missing_str = ", ".join(dict.fromkeys(missing))
raise ValueError(f"Custom 场景缺少以下环境变量: {missing_str}")
interrupt_mode = env_int("CUSTOM_INTERRUPT_MODE", 0)
avatar_enabled = env_bool("CUSTOM_AVATAR_ENABLED", False)
target_user_ids = env_list("CUSTOM_AGENT_TARGET_USER_ID")
if not target_user_ids:
target_user_ids = [rtc_user_id or ""]
return {
"SceneConfig": {
"icon": env_str("CUSTOM_SCENE_ICON", DEFAULT_SCENE_ICON),
"name": env_str("CUSTOM_SCENE_NAME", DEFAULT_SCENE_NAME),
},
"AccountConfig": {
"accessKeyId": access_key_id,
"secretKey": secret_key,
},
"RTCConfig": {
"AppId": rtc_app_id,
"AppKey": rtc_app_key,
"RoomId": rtc_room_id,
"UserId": rtc_user_id,
"Token": rtc_token,
},
"VoiceChat": {
"AppId": rtc_app_id,
"RoomId": rtc_room_id,
"TaskId": task_id,
"AgentConfig": {
"TargetUserId": target_user_ids,
"WelcomeMessage": welcome_message,
"UserId": agent_user_id,
"EnableConversationStateCallback": True,
},
"Config": {
"ASRConfig": {
"Provider": env_str("CUSTOM_ASR_PROVIDER", DEFAULT_ASR_PROVIDER),
"ProviderParams": {
"Mode": env_str("CUSTOM_ASR_MODE", DEFAULT_ASR_MODE),
"AppId": asr_app_id,
"Cluster": env_str("CUSTOM_ASR_CLUSTER", DEFAULT_ASR_CLUSTER),
},
},
"TTSConfig": {
"Provider": env_str("CUSTOM_TTS_PROVIDER", DEFAULT_TTS_PROVIDER),
"ProviderParams": {
"app": {
"appid": tts_app_id,
"cluster": env_str(
"CUSTOM_TTS_CLUSTER", DEFAULT_TTS_CLUSTER
),
},
"audio": {
"voice_type": env_str(
"CUSTOM_TTS_VOICE_TYPE", DEFAULT_TTS_VOICE_TYPE
),
"speed_ratio": env_number("CUSTOM_TTS_SPEED_RATIO", 1),
"pitch_ratio": env_number("CUSTOM_TTS_PITCH_RATIO", 1),
"volume_ratio": env_number("CUSTOM_TTS_VOLUME_RATIO", 1),
},
},
},
"LLMConfig": build_llm_config(llm_settings),
"InterruptMode": interrupt_mode,
},
"AvatarConfig": {
"Enabled": avatar_enabled,
"AvatarType": env_str("CUSTOM_AVATAR_TYPE", DEFAULT_AVATAR_TYPE),
"AvatarRole": env_str("CUSTOM_AVATAR_ROLE", DEFAULT_AVATAR_ROLE),
"BackgroundUrl": env_str("CUSTOM_AVATAR_BACKGROUND_URL"),
"VideoBitrate": env_int(
"CUSTOM_AVATAR_VIDEO_BITRATE", DEFAULT_AVATAR_VIDEO_BITRATE
),
"AvatarAppID": env_str("CUSTOM_AVATAR_APP_ID"),
"AvatarToken": env_str("CUSTOM_AVATAR_TOKEN"),
},
"InterruptMode": interrupt_mode,
},
}

View File

@ -1,225 +1,14 @@
"""
Copyright 2025 Beijing Volcano Engine Technology Co., Ltd. All Rights Reserved.
SPDX-license-identifier: BSD-3-Clause
Compatibility entry point.
FastAPI backend migrated from Server/app.js (Node.js + Koa)
Recommended:
uv run uvicorn server:app --host 0.0.0.0 --port 3001 --reload
"""
import json
import os
import time
import uuid
from pathlib import Path
import httpx
from fastapi import FastAPI, Request
from fastapi.middleware.cors import CORSMiddleware
from fastapi.responses import JSONResponse
from signer import Signer
from rtc_token import AccessToken, privileges
app = FastAPI()
app.add_middleware(
CORSMiddleware,
allow_origins=["*"],
allow_methods=["*"],
allow_headers=["*"],
)
SCENES_DIR = Path(__file__).parent / "scenes"
def load_scenes() -> dict:
scenes = {}
for p in SCENES_DIR.glob("*.json"):
with open(p, encoding="utf-8") as f:
scenes[p.stem] = json.load(f)
return scenes
Scenes = load_scenes()
def assert_value(value, msg: str):
if not value or (isinstance(value, str) and " " in value):
raise ValueError(msg)
def error_response(action: str, message: str):
return JSONResponse(
{
"ResponseMetadata": {
"Action": action,
"Error": {"Code": -1, "Message": message},
}
}
)
@app.post("/proxy")
async def proxy(request: Request):
action = request.query_params.get("Action", "")
version = request.query_params.get("Version", "2024-12-01")
try:
assert_value(action, "Action 不能为空")
assert_value(version, "Version 不能为空")
body = await request.json()
scene_id = body.get("SceneID", "")
assert_value(scene_id, "SceneID 不能为空SceneID 用于指定场景的 JSON")
json_data = Scenes.get(scene_id)
if not json_data:
raise ValueError(
f"{scene_id} 不存在,请先在 backend/scenes 下定义该场景的 JSON."
)
voice_chat = json_data.get("VoiceChat", {})
account_config = json_data.get("AccountConfig", {})
assert_value(
account_config.get("accessKeyId"), "AccountConfig.accessKeyId 不能为空"
)
assert_value(
account_config.get("secretKey"), "AccountConfig.secretKey 不能为空"
)
if action == "StartVoiceChat":
req_body = voice_chat
elif action == "StopVoiceChat":
app_id = voice_chat.get("AppId", "")
room_id = voice_chat.get("RoomId", "")
task_id = voice_chat.get("TaskId", "")
assert_value(app_id, "VoiceChat.AppId 不能为空")
assert_value(room_id, "VoiceChat.RoomId 不能为空")
assert_value(task_id, "VoiceChat.TaskId 不能为空")
req_body = {"AppId": app_id, "RoomId": room_id, "TaskId": task_id}
else:
req_body = {}
request_data = {
"region": "cn-north-1",
"method": "POST",
"params": {"Action": action, "Version": version},
"headers": {
"Host": "rtc.volcengineapi.com",
"Content-type": "application/json",
},
"body": req_body,
}
signer = Signer(request_data, "rtc")
signer.add_authorization(account_config)
async with httpx.AsyncClient() as client:
resp = await client.post(
f"https://rtc.volcengineapi.com?Action={action}&Version={version}",
headers=request_data["headers"],
json=req_body,
)
return JSONResponse(resp.json())
except ValueError as e:
return error_response(action, str(e))
except Exception as e:
return error_response(action, str(e))
@app.post("/getScenes")
async def get_scenes():
try:
scenes_list = []
for scene_name, data in Scenes.items():
scene_config = data.get("SceneConfig", {})
rtc_config = data.get("RTCConfig", {})
voice_chat = data.get("VoiceChat", {})
app_id = rtc_config.get("AppId", "")
assert_value(app_id, f"{scene_name} 场景的 RTCConfig.AppId 不能为空")
token = rtc_config.get("Token", "")
user_id = rtc_config.get("UserId", "")
room_id = rtc_config.get("RoomId", "")
app_key = rtc_config.get("AppKey", "")
if app_id and (not token or not user_id or not room_id):
rtc_config["RoomId"] = voice_chat["RoomId"] = room_id or str(
uuid.uuid4()
)
rtc_config["UserId"] = user_id = user_id or str(uuid.uuid4())
if voice_chat.get("AgentConfig") and voice_chat["AgentConfig"].get(
"TargetUserId"
):
voice_chat["AgentConfig"]["TargetUserId"][0] = rtc_config["UserId"]
assert_value(
app_key, f"自动生成 Token 时,{scene_name} 场景的 AppKey 不可为空"
)
key = AccessToken(
app_id, app_key, rtc_config["RoomId"], rtc_config["UserId"]
)
key.add_privilege(privileges["PrivSubscribeStream"], 0)
key.add_privilege(privileges["PrivPublishStream"], 0)
key.expire_time(int(time.time()) + 24 * 3600)
rtc_config["Token"] = key.serialize()
scene_config["id"] = scene_name
scene_config["botName"] = voice_chat.get("AgentConfig", {}).get("UserId")
scene_config["isInterruptMode"] = (
voice_chat.get("Config", {}).get("InterruptMode") == 0
)
scene_config["isVision"] = (
voice_chat.get("Config", {})
.get("LLMConfig", {})
.get("VisionConfig", {})
.get("Enable")
)
scene_config["isScreenMode"] = (
voice_chat.get("Config", {})
.get("LLMConfig", {})
.get("VisionConfig", {})
.get("SnapshotConfig", {})
.get("StreamType")
== 1
)
scene_config["isAvatarScene"] = (
voice_chat.get("Config", {}).get("AvatarConfig", {}).get("Enabled")
)
scene_config["avatarBgUrl"] = (
voice_chat.get("Config", {})
.get("AvatarConfig", {})
.get("BackgroundUrl")
)
rtc_out = {k: v for k, v in rtc_config.items() if k != "AppKey"}
scenes_list.append(
{
"scene": scene_config,
"rtc": rtc_out,
}
)
return JSONResponse(
{
"ResponseMetadata": {"Action": "getScenes"},
"Result": {"scenes": scenes_list},
}
)
except ValueError as e:
return JSONResponse(
{
"ResponseMetadata": {
"Action": "getScenes",
"Error": {"Code": -1, "Message": str(e)},
}
}
)
from server import app
if __name__ == "__main__":
import uvicorn
uvicorn.run("main:app", host="0.0.0.0", port=3001, reload=True)
uvicorn.run("server:app", host="0.0.0.0", port=3001, reload=True)

View File

@ -7,7 +7,9 @@ dependencies = [
"fastapi>=0.110.0",
"uvicorn[standard]>=0.29.0",
"httpx>=0.27.0",
"python-dotenv>=1.2.2",
"python-multipart>=0.0.9",
"volcengine-python-sdk[ark]>=4.0.6",
]
[tool.uv]

View File

@ -0,0 +1 @@
"""Route modules."""

View File

@ -0,0 +1,108 @@
"""
POST /api/chat_callback: custom LLM callback (SSE streaming response).
"""
import json
from fastapi import APIRouter, Request
from fastapi.responses import StreamingResponse
from services.local_llm_service import local_llm_service
from services.scene_service import ensure_custom_llm_authorized, get_custom_llm_callback_settings
from utils.responses import custom_llm_error_response
router = APIRouter()
@router.post("/api/chat_callback")
async def chat_callback(request: Request):
try:
settings = get_custom_llm_callback_settings()
ensure_custom_llm_authorized(request, settings["api_key"])
payload = await request.json()
except PermissionError as exc:
return custom_llm_error_response(
str(exc),
code="AuthenticationError",
status_code=401,
)
except json.JSONDecodeError:
return custom_llm_error_response(
"请求体必须是合法的 JSON",
code="BadRequest",
status_code=400,
)
except ValueError as exc:
return custom_llm_error_response(str(exc))
except Exception as exc:
return custom_llm_error_response(
f"解析请求失败: {exc}",
code="InternalError",
status_code=500,
)
messages = payload.get("messages")
if not isinstance(messages, list) or not messages:
return custom_llm_error_response(
"messages 不能为空",
code="BadRequest",
status_code=400,
)
last_message = messages[-1]
if last_message.get("role") != "user":
return custom_llm_error_response(
"最后一条消息必须是用户消息",
code="BadRequest",
status_code=400,
)
try:
stream_iterator = local_llm_service.chat_stream(
history_messages=messages,
request_options={
"temperature": payload.get("temperature"),
"max_tokens": payload.get("max_tokens"),
"top_p": payload.get("top_p"),
},
)
except Exception as exc:
return custom_llm_error_response(
f"初始化本地 LLM 流式调用失败: {exc}",
code="InternalError",
status_code=500,
)
def generate_sse():
has_error = False
try:
for chunk in stream_iterator:
if chunk is None:
continue
if hasattr(chunk, "model_dump_json"):
chunk_json = chunk.model_dump_json()
else:
chunk_json = json.dumps(chunk, ensure_ascii=False)
yield f"data: {chunk_json}\n\n"
except GeneratorExit:
raise
except Exception as exc:
has_error = True
print(f"❌ /api/chat_callback 流式输出失败: {exc}")
if has_error:
print("⚠️ 已提前结束当前 SSE 流")
yield "data: [DONE]\n\n"
return StreamingResponse(
generate_sse(),
status_code=200,
media_type="text/event-stream",
headers={
"Cache-Control": "no-cache",
"Connection": "keep-alive",
"Access-Control-Allow-Origin": "*",
},
)

85
backend/routes/debug.py Normal file
View File

@ -0,0 +1,85 @@
"""
Debug endpoints: POST /debug/chat, GET /debug/rag.
"""
import json
import time
from fastapi import APIRouter
from fastapi.responses import StreamingResponse
from schemas.chat import DebugChatRequest
from services.local_llm_service import local_llm_service
from services.rag_service import rag_service
router = APIRouter(prefix="/debug")
@router.post("/chat")
async def debug_chat(request: DebugChatRequest):
current_messages = [
{"role": message.role, "content": message.content} for message in request.history
]
current_messages.append({"role": "user", "content": request.question})
start_time = time.time()
stream_iterator = local_llm_service.chat_stream(
history_messages=current_messages,
)
def generate_text():
full_ai_response = ""
total_usage = None
for chunk in stream_iterator:
if chunk is None:
continue
choices = getattr(chunk, "choices", None) or []
if choices:
delta = getattr(choices[0], "delta", None)
content = getattr(delta, "content", None)
if content:
full_ai_response += content
yield content
usage = getattr(chunk, "usage", None)
if usage:
total_usage = usage
print(f"DEBUG: LLM 调用耗时: {time.time() - start_time:.2f}s")
if total_usage:
print(
"🎫 Token 统计: "
f"Total={total_usage.total_tokens} "
f"(P:{total_usage.prompt_tokens}, C:{total_usage.completion_tokens})"
)
new_history = [
{"role": message.role, "content": message.content}
for message in request.history
]
new_history.append({"role": "user", "content": request.question})
new_history.append({"role": "assistant", "content": full_ai_response})
print("\n" + "=" * 50)
print("🐞 调试完成!以下是可用于下次请求的 history 结构:")
print(json.dumps({"history": new_history}, ensure_ascii=False, indent=2))
print("=" * 50 + "\n")
return StreamingResponse(generate_text(), media_type="text/plain")
@router.get("/rag")
async def debug_rag(query: str):
if not query:
return {"error": "请提供 query 参数"}
print(f"🔍 [Debug] 正在检索知识库: {query}")
context = await rag_service.retrieve(query)
return {
"query": query,
"retrieved_context": context,
"length": len(context) if context else 0,
"status": "success" if context else "no_results_or_error",
}

81
backend/routes/proxy.py Normal file
View File

@ -0,0 +1,81 @@
"""
POST /proxy: RTC OpenAPI proxy (with request signing).
"""
import httpx
from fastapi import APIRouter, Request
from fastapi.responses import JSONResponse
from config.custom_scene import get_rtc_openapi_version
from security.signer import Signer
from services.scene_service import Scenes, prepare_scene_runtime
from utils.responses import error_response
from utils.validation import assert_scene_value, assert_value
router = APIRouter()
@router.post("/proxy")
async def proxy(request: Request):
action = request.query_params.get("Action", "")
version = request.query_params.get("Version") or get_rtc_openapi_version()
try:
assert_value(action, "Action 不能为空")
assert_value(version, "Version 不能为空")
body = await request.json()
scene_id = body.get("SceneID", "")
        assert_value(scene_id, "SceneID 不能为空,SceneID 用于指定场景配置")
json_data = Scenes.get(scene_id)
if not json_data:
raise ValueError(f"{scene_id} 不存在,请先配置对应场景。")
_, _, voice_chat = prepare_scene_runtime(scene_id, json_data)
account_config = json_data.get("AccountConfig", {})
assert_scene_value(
scene_id, "AccountConfig.accessKeyId", account_config.get("accessKeyId")
)
assert_scene_value(
scene_id, "AccountConfig.secretKey", account_config.get("secretKey")
)
if action == "StartVoiceChat":
req_body = voice_chat
elif action == "StopVoiceChat":
app_id = voice_chat.get("AppId", "")
room_id = voice_chat.get("RoomId", "")
task_id = voice_chat.get("TaskId", "")
assert_scene_value(scene_id, "VoiceChat.AppId", app_id)
assert_scene_value(scene_id, "VoiceChat.RoomId", room_id)
assert_scene_value(scene_id, "VoiceChat.TaskId", task_id)
req_body = {"AppId": app_id, "RoomId": room_id, "TaskId": task_id}
else:
req_body = {}
request_data = {
"region": "cn-north-1",
"method": "POST",
"params": {"Action": action, "Version": version},
"headers": {
"Host": "rtc.volcengineapi.com",
"Content-type": "application/json",
},
"body": req_body,
}
signer = Signer(request_data, "rtc")
signer.add_authorization(account_config)
async with httpx.AsyncClient() as client:
resp = await client.post(
f"https://rtc.volcengineapi.com?Action={action}&Version={version}",
headers=request_data["headers"],
json=req_body,
)
return JSONResponse(resp.json())
    except Exception as e:
        # ValueError 与其他异常的处理逻辑完全相同,合并为一个分支
        return error_response(action, str(e))
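/proxy 针对不同 Action 组装 req_body 的分支逻辑可以抽成如下示意(voice_chat 为假设数据,与真实场景配置无关):

```python
# 假设的 VoiceChat 配置,仅演示 /proxy 的 req_body 组装分支
voice_chat = {
    "AppId": "app-id-demo",
    "RoomId": "ChatRoom01",
    "TaskId": "ChatTask01",
    "AgentConfig": {},
}

def build_req_body(action: str, voice_chat: dict) -> dict:
    if action == "StartVoiceChat":
        # 启动任务时透传完整 VoiceChat 配置
        return voice_chat
    if action == "StopVoiceChat":
        # 停止任务只需 AppId/RoomId/TaskId 三元组定位
        return {
            "AppId": voice_chat.get("AppId", ""),
            "RoomId": voice_chat.get("RoomId", ""),
            "TaskId": voice_chat.get("TaskId", ""),
        }
    return {}

print(sorted(build_req_body("StopVoiceChat", voice_chat)))
```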

backend/routes/scenes.py Normal file

@@ -0,0 +1,74 @@
"""
POST /getScenes 场景列表
"""
from fastapi import APIRouter
from fastapi.responses import JSONResponse
from services.scene_service import Scenes, prepare_scene_runtime
router = APIRouter()
@router.post("/getScenes")
async def get_scenes():
try:
scenes_list = []
for scene_name, data in Scenes.items():
scene_config, rtc_config, voice_chat = prepare_scene_runtime(
scene_name, data
)
scene_config["id"] = scene_name
scene_config["botName"] = voice_chat.get("AgentConfig", {}).get("UserId")
scene_config["isInterruptMode"] = (
voice_chat.get("Config", {}).get("InterruptMode") == 0
)
scene_config["isVision"] = (
voice_chat.get("Config", {})
.get("LLMConfig", {})
.get("VisionConfig", {})
.get("Enable")
)
scene_config["isScreenMode"] = (
voice_chat.get("Config", {})
.get("LLMConfig", {})
.get("VisionConfig", {})
.get("SnapshotConfig", {})
.get("StreamType")
== 1
)
scene_config["isAvatarScene"] = (
voice_chat.get("Config", {}).get("AvatarConfig", {}).get("Enabled")
)
scene_config["avatarBgUrl"] = (
voice_chat.get("Config", {})
.get("AvatarConfig", {})
.get("BackgroundUrl")
)
rtc_out = {k: v for k, v in rtc_config.items() if k != "AppKey"}
scenes_list.append(
{
"scene": scene_config,
"rtc": rtc_out,
}
)
return JSONResponse(
{
"ResponseMetadata": {"Action": "getScenes"},
"Result": {"scenes": scenes_list},
}
)
except ValueError as e:
return JSONResponse(
{
"ResponseMetadata": {
"Action": "getScenes",
"Error": {"Code": -1, "Message": str(e)},
}
}
)
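/getScenes 把嵌套的 VoiceChat 配置拍平成前端易用的布尔标志,这一取值逻辑可以用下面的示意验证(配置数据为假设值):

```python
# 假设的 VoiceChat.Config,演示嵌套 .get 链如何拍平为布尔标志
config = {
    "InterruptMode": 0,
    "LLMConfig": {
        "VisionConfig": {"Enable": False, "SnapshotConfig": {"StreamType": 1}}
    },
    "AvatarConfig": {"Enabled": False},
}

flags = {
    "isInterruptMode": config.get("InterruptMode") == 0,
    "isVision": config.get("LLMConfig", {}).get("VisionConfig", {}).get("Enable"),
    "isScreenMode": config.get("LLMConfig", {})
    .get("VisionConfig", {})
    .get("SnapshotConfig", {})
    .get("StreamType")
    == 1,
}

print(flags)
```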

@@ -1,75 +0,0 @@
{
"SceneConfig": {
"icon": "https://lf3-rtc-demo.volccdn.com/obj/rtc-aigc-assets/DoubaoAvatar.png",
"name": "自定义助手"
},
"AccountConfig": {
"accessKeyId": "AKLTZWQ0NDljZTZhNjJhNDM4N2I5MjFiZWFkNDg5YWQ4YzI",
"secretKey": "Wm1WbU1EZzNaREl4WlRJNE5HRXhZV0V4TmpRMU1UQmpNamt4WXpJM04yWQ=="
},
"RTCConfig": {
"AppId": "67d9266375d80e01b76f4108",
"AppKey": "",
"RoomId": "ChatRoom01",
"UserId": "Huoshan02",
"Token": "00167d9266375d80e01b76f4108SQCOTuMBk6DHaRPb0GkKAENoYXRSb29tMDEJAEh1b3NoYW4wMgYAAAAT29BpAQAT29BpAgAT29BpAwAT29BpBAAT29BpBQAT29BpIADq+UCvnjGv8j0Ay2eaLCUiSBG7oXmpGc6MdcdBYJA6CA=="
},
"VoiceChat": {
"AppId": "67d9266375d80e01b76f4108",
"RoomId": "ChatRoom01",
"TaskId": "ChatTask01",
"AgentConfig": {
"TargetUserId": ["Huoshan02"],
"WelcomeMessage": "你好,我是小乖,有什么需要帮忙的吗?",
"UserId": "Huoshan0",
"EnableConversationStateCallback": true
},
"Config": {
"ASRConfig": {
"Provider": "volcano",
"ProviderParams": {
"Mode": "smallmodel",
"AppId": "3340034205",
"Cluster": "volcengine_streaming_common"
}
},
"TTSConfig": {
"Provider": "volcano",
"ProviderParams": {
"app": {
"appid": "3340034205",
"cluster": "volcano_tts"
},
"audio": {
"voice_type": "BV001_streaming",
"speed_ratio": 1,
"pitch_ratio": 1,
"volume_ratio": 1
}
}
},
"LLMConfig": {
"Mode": "ArkV3",
"EndPointId": "ep-20250612105810-m6jh8",
"SystemMessages": [
"你是小乖,性格幽默又善解人意。你在表达时需简明扼要,有自己的观点。"
],
"VisionConfig": {
"Enable": false
},
"ThinkingType": "disabled"
},
"InterruptMode": 0
},
"AvatarConfig": {
"Enabled": false,
"AvatarType": "3min",
"AvatarRole": "250623-zhibo-linyunzhi",
"BackgroundUrl": "",
"VideoBitrate": 2000,
"AvatarAppID": "",
"AvatarToken": ""
},
"InterruptMode": 0
}
}

@@ -0,0 +1 @@
"""Pydantic 请求/响应模型"""

backend/schemas/chat.py Normal file

@@ -0,0 +1,15 @@
"""
聊天相关的请求模型
"""
from pydantic import BaseModel, Field
class ChatMessage(BaseModel):
role: str
content: str
class DebugChatRequest(BaseModel):
history: list[ChatMessage] = Field(default_factory=list)
question: str

@@ -0,0 +1 @@
"""Backend security package."""

@@ -4,19 +4,18 @@ SPDX-license-identifier: BSD-3-Clause
Migrated from Server/token.js
"""
+import base64
import hashlib
import hmac
import random
import struct
import time
-import base64
VERSION = "001"
VERSION_LENGTH = 3
APP_ID_LENGTH = 24
-_random_nonce = random.randint(0, 0xFFFFFFFF)
privileges = {
"PrivPublishStream": 0,
"privPublishAudioStream": 1,
@@ -50,8 +49,9 @@ class ByteBuf:
if not m:
self.put_uint16(0)
return self
-        self.put_uint16(len(m))
-        for key, value in m.items():
+        sorted_items = sorted(m.items(), key=lambda x: int(x[0]))
+        self.put_uint16(len(sorted_items))
+        for key, value in sorted_items:
self.put_uint16(int(key))
self.put_uint32(int(value))
return self
@@ -71,7 +71,8 @@ class AccessToken:
self.room_id = room_id
self.user_id = user_id
self.issued_at = int(time.time())
-        self.nonce = _random_nonce
+        random.seed(time.time())
+        self.nonce = random.randint(1, 99999999)
self.expire_at = 0
self._privileges: dict = {}
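pack_map_uint32 改为按 key 数值排序后,同一组权限无论插入顺序如何都会序列化出相同字节,从而生成稳定的 Token。下面是一个自包含的最小示意(字节序按小端假设,仅用于演示排序带来的确定性):

```python
import struct

def pack_map_uint32(m: dict) -> bytes:
    # 与修复后的 ByteBuf.pack_map_uint32 思路一致:先按 key 数值排序再写入
    out = struct.pack("<H", len(m))
    for key, value in sorted(m.items(), key=lambda x: int(x[0])):
        out += struct.pack("<H", int(key)) + struct.pack("<I", int(value))
    return out

# 两种插入顺序,序列化结果应完全一致
a = pack_map_uint32({"0": 100, "1": 200})
b = pack_map_uint32({"1": 200, "0": 100})
print(a == b)  # True
```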

@@ -5,6 +5,7 @@ SPDX-license-identifier: BSD-3-Clause
Migrated from @volcengine/openapi Signer (AWS SigV4 compatible)
Reference: https://www.volcengine.com/docs/6348/69828
"""
import hashlib
import hmac
import json
@@ -21,7 +22,7 @@ def _hmac_sha256(key: bytes, data: str) -> bytes:
def _get_signing_key(secret_key: str, date_str: str, region: str, service: str) -> bytes:
-    k_date = _hmac_sha256(("HMAC-SHA256" + secret_key).encode("utf-8"), date_str)
+    k_date = _hmac_sha256(secret_key.encode("utf-8"), date_str)
k_region = _hmac_sha256(k_date, region)
k_service = _hmac_sha256(k_region, service)
k_signing = _hmac_sha256(k_service, "request")
@@ -38,7 +39,7 @@ class Signer:
request_data: {
region: str,
method: str,
-            params: dict, # query params (Action, Version, ...)
+            params: dict,
headers: dict,
body: dict,
}
@@ -52,10 +53,6 @@
self.service = service
def add_authorization(self, account_config: dict):
-        """
-        Computes and injects Authorization + X-Date headers into self.headers.
-        account_config: { accessKeyId: str, secretKey: str }
-        """
access_key = account_config["accessKeyId"]
secret_key = account_config["secretKey"]
@@ -65,10 +62,11 @@
self.headers["X-Date"] = datetime_str
self.headers["X-Content-Sha256"] = _sha256_hex(
-            json.dumps(self.body, separators=(",", ":"), ensure_ascii=False).encode("utf-8")
+            json.dumps(self.body, separators=(",", ":"), ensure_ascii=False).encode(
+                "utf-8"
+            )
)
-        # Canonical headers: sorted lowercase header names
signed_header_names = sorted(k.lower() for k in self.headers)
canonical_headers = "".join(
f"{k}:{self.headers[next(h for h in self.headers if h.lower() == k)]}\n"
@@ -76,34 +74,38 @@
)
signed_headers_str = ";".join(signed_header_names)
-        # Canonical query string
sorted_params = sorted(self.params.items())
canonical_qs = "&".join(
f"{quote(str(k), safe='')}={quote(str(v), safe='')}"
for k, v in sorted_params
)
-        # Canonical request
body_hash = self.headers["X-Content-Sha256"]
-        canonical_request = "\n".join([
-            self.method,
-            "/",
-            canonical_qs,
-            canonical_headers,
-            signed_headers_str,
-            body_hash,
-        ])
+        canonical_request = "\n".join(
+            [
+                self.method,
+                "/",
+                canonical_qs,
+                canonical_headers,
+                signed_headers_str,
+                body_hash,
+            ]
+        )
credential_scope = f"{date_str}/{self.region}/{self.service}/request"
-        string_to_sign = "\n".join([
-            "HMAC-SHA256",
-            datetime_str,
-            credential_scope,
-            _sha256_hex(canonical_request.encode("utf-8")),
-        ])
+        string_to_sign = "\n".join(
+            [
+                "HMAC-SHA256",
+                datetime_str,
+                credential_scope,
+                _sha256_hex(canonical_request.encode("utf-8")),
+            ]
+        )
signing_key = _get_signing_key(secret_key, date_str, self.region, self.service)
-        signature = hmac.new(signing_key, string_to_sign.encode("utf-8"), hashlib.sha256).hexdigest()
+        signature = hmac.new(
+            signing_key, string_to_sign.encode("utf-8"), hashlib.sha256
+        ).hexdigest()
self.headers["Authorization"] = (
f"HMAC-SHA256 Credential={access_key}/{credential_scope}, "
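修复后的签名密钥派生链(secret_key → date → region → service → "request")可以独立验证,示例中的密钥和日期均为假设值:

```python
import hashlib
import hmac

def hmac_sha256(key: bytes, data: str) -> bytes:
    return hmac.new(key, data.encode("utf-8"), hashlib.sha256).digest()

def get_signing_key(secret_key: str, date_str: str, region: str, service: str) -> bytes:
    # 与修复后的 _get_signing_key 一致:直接以 secret_key 作为初始 key 逐层派生
    k_date = hmac_sha256(secret_key.encode("utf-8"), date_str)
    k_region = hmac_sha256(k_date, region)
    k_service = hmac_sha256(k_region, service)
    return hmac_sha256(k_service, "request")

key = get_signing_key("demo-secret", "20260329", "cn-north-1", "rtc")
print(len(key))  # 32
```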

backend/server.py Normal file

@@ -0,0 +1,34 @@
from pathlib import Path
from dotenv import load_dotenv
BASE_DIR = Path(__file__).parent
load_dotenv(BASE_DIR / ".env", override=False)
# 路由必须在 load_dotenv 之后导入,因为模块级代码会读取环境变量
from routes.chat_callback import router as chat_callback_router # noqa: E402
from routes.debug import router as debug_router # noqa: E402
from routes.proxy import router as proxy_router # noqa: E402
from routes.scenes import router as scenes_router # noqa: E402
from fastapi import FastAPI # noqa: E402
from fastapi.middleware.cors import CORSMiddleware # noqa: E402
app = FastAPI()
app.add_middleware(
CORSMiddleware,
allow_origins=["*"],
allow_methods=["*"],
allow_headers=["*"],
)
app.include_router(proxy_router)
app.include_router(scenes_router)
app.include_router(chat_callback_router)
app.include_router(debug_router)
if __name__ == "__main__":
import uvicorn
uvicorn.run("server:app", host="0.0.0.0", port=3001, reload=True)
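server.py 中「路由必须在 load_dotenv 之后导入」的原因是模块级代码在导入瞬间就读取环境变量。下面用 exec 模拟模块导入来演示这一时序(变量名为假设):

```python
import os

# 模拟一个在模块级读取环境变量的路由模块
MODULE_SRC = "import os\nAPP_ID = os.getenv('CUSTOM_RTC_APP_ID', '')"

def import_module_src(src: str) -> dict:
    ns: dict = {}
    exec(src, ns)  # exec 近似于模块导入时执行模块级代码
    return ns

# 先写入环境变量(相当于 load_dotenv),再“导入”模块才能读到值
os.environ["CUSTOM_RTC_APP_ID"] = "demo-app-id"
mod = import_module_src(MODULE_SRC)
print(mod["APP_ID"])  # demo-app-id
```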

@@ -0,0 +1 @@
"""本地 LLM 与知识检索服务。"""

@@ -0,0 +1,139 @@
"""
本地 CustomLLM 服务。
当前实现直接在同一个 FastAPI 进程内调用方舟 SDK,
并由 /api/chat_callback 对外提供火山要求的 SSE 回调接口。
"""
from __future__ import annotations
from dataclasses import dataclass
from typing import Any, Iterator
from utils.env import env_float, env_int, env_str
DEFAULT_ARK_BASE_URL = "https://ark.cn-beijing.volces.com/api/v3"
DEFAULT_ARK_TIMEOUT_SECONDS = 1800
DEFAULT_ARK_TEMPERATURE = 0
def _coalesce(*values):
for value in values:
if value is not None:
return value
return None
@dataclass(frozen=True)
class LocalLLMSettings:
api_key: str
endpoint_id: str
base_url: str
timeout_seconds: int
system_prompt: str
default_temperature: float
def _load_settings() -> LocalLLMSettings:
api_key = env_str("ARK_API_KEY")
endpoint_id = env_str("ARK_ENDPOINT_ID")
if not api_key:
raise ValueError("ARK_API_KEY 不能为空")
if not endpoint_id:
raise ValueError("ARK_ENDPOINT_ID 不能为空")
return LocalLLMSettings(
api_key=api_key,
endpoint_id=endpoint_id,
base_url=env_str("ARK_BASE_URL", DEFAULT_ARK_BASE_URL),
timeout_seconds=env_int("ARK_TIMEOUT_SECONDS", DEFAULT_ARK_TIMEOUT_SECONDS),
system_prompt=env_str(
"LOCAL_LLM_SYSTEM_PROMPT",
env_str("CUSTOM_LLM_SYSTEM_MESSAGE"),
),
default_temperature=env_float(
"LOCAL_LLM_TEMPERATURE", DEFAULT_ARK_TEMPERATURE
),
)
class LocalLLMService:
def __init__(self):
self._client = None
self._settings: LocalLLMSettings | None = None
@property
def settings(self) -> LocalLLMSettings:
if self._settings is None:
self._settings = _load_settings()
return self._settings
def _get_client(self):
if self._client is not None:
return self._client
try:
from volcenginesdkarkruntime import Ark
except ImportError as exc:
raise RuntimeError(
                "未安装 volcenginesdkarkruntime,请先执行 uv sync 安装依赖"
) from exc
s = self.settings
self._client = Ark(
base_url=s.base_url,
api_key=s.api_key,
timeout=s.timeout_seconds,
)
return self._client
def chat_stream(
self,
history_messages: list[dict[str, Any]],
rag_context: str = "",
request_options: dict[str, Any] | None = None,
) -> Iterator[Any]:
settings = self.settings
client = self._get_client()
request_options = request_options or {}
system_blocks = [settings.system_prompt]
if rag_context:
system_blocks.append(f"### 参考知识库(绝对准则)\n{rag_context.strip()}")
messages = [
{
"role": "system",
"content": "\n\n".join(block for block in system_blocks if block),
}
]
messages.extend(history_messages)
payload: dict[str, Any] = {
"model": settings.endpoint_id,
"messages": messages,
"temperature": _coalesce(
request_options.get("temperature"),
settings.default_temperature,
),
"stream": True,
"stream_options": {"include_usage": True},
}
if request_options.get("max_tokens") is not None:
payload["max_tokens"] = request_options["max_tokens"]
if request_options.get("top_p") is not None:
payload["top_p"] = request_options["top_p"]
print(f"🚀 发起流式调用 (Endpoint: {settings.endpoint_id})")
try:
stream = client.chat.completions.create(**payload)
for chunk in stream:
yield chunk
except Exception as exc:
print(f"❌ LLM 调用失败: {exc}")
raise
local_llm_service = LocalLLMService()
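chat_stream 产出的是方舟 SDK 的原始 chunk,调用方需要按 delta 逐段累积文本、在流末尾读取 usage。以下用 SimpleNamespace 模拟 chunk 结构(结构为假设,仅演示累积方式):

```python
from types import SimpleNamespace

# 模拟流式返回:前两个 chunk 带增量文本,最后一个只带 usage
chunks = [
    SimpleNamespace(
        choices=[SimpleNamespace(delta=SimpleNamespace(content="你好"))], usage=None
    ),
    SimpleNamespace(
        choices=[SimpleNamespace(delta=SimpleNamespace(content=",世界"))], usage=None
    ),
    SimpleNamespace(
        choices=[],
        usage=SimpleNamespace(total_tokens=10, prompt_tokens=6, completion_tokens=4),
    ),
]

full, usage = "", None
for chunk in chunks:
    if chunk.choices:
        content = getattr(chunk.choices[0].delta, "content", None)
        if content:
            full += content
    if chunk.usage:
        usage = chunk.usage

print(full)  # 你好,世界
```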

@@ -0,0 +1,30 @@
"""
最小可用的 RAG 服务占位实现。
当前版本支持两种简单来源:
- RAG_STATIC_CONTEXT:直接写在环境变量中的固定知识
- RAG_CONTEXT_FILE:读取本地文件全文作为知识上下文
后续如果要接真正的向量检索,可以直接替换 retrieve 方法实现。
"""
from __future__ import annotations
from pathlib import Path
from utils.env import env_str
class RagService:
async def retrieve(self, query: str) -> str:
_ = query
context_file = env_str("RAG_CONTEXT_FILE")
if context_file:
path = Path(context_file).expanduser()
if path.exists() and path.is_file():
return path.read_text(encoding="utf-8")
return env_str("RAG_STATIC_CONTEXT")
rag_service = RagService()
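retrieve 的回退顺序是「先 RAG_CONTEXT_FILE,后 RAG_STATIC_CONTEXT」,可以用下面的自包含示意验证(文件内容为假设数据):

```python
import os
import tempfile
from pathlib import Path

def retrieve(query: str) -> str:
    # 与 RagService.retrieve 相同的回退顺序:文件优先,静态环境变量兜底
    context_file = os.getenv("RAG_CONTEXT_FILE", "").strip()
    if context_file:
        path = Path(context_file).expanduser()
        if path.is_file():
            return path.read_text(encoding="utf-8")
    return os.getenv("RAG_STATIC_CONTEXT", "")

with tempfile.NamedTemporaryFile(
    "w", suffix=".txt", delete=False, encoding="utf-8"
) as f:
    f.write("产品 A 的保修期为两年。")
    tmp = f.name

os.environ["RAG_CONTEXT_FILE"] = tmp
context = retrieve("保修")
print(context)  # 产品 A 的保修期为两年。

os.environ.pop("RAG_CONTEXT_FILE")
os.unlink(tmp)
```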

@@ -0,0 +1,99 @@
"""
场景配置加载与运行时准备
"""
import copy
import json
import time
import uuid
from pathlib import Path
from typing import Any
from fastapi import Request
from config.custom_scene import (
CUSTOM_SCENE_ID,
build_custom_scene_from_env,
build_llm_settings_from_env,
)
from security.rtc_token import AccessToken, privileges
from utils.validation import assert_scene_value, assert_token_generation_ready
BASE_DIR = Path(__file__).resolve().parent.parent
SCENES_DIR = BASE_DIR / "scenes"
def load_scenes() -> dict:
scenes = {
CUSTOM_SCENE_ID: build_custom_scene_from_env(),
}
for p in sorted(SCENES_DIR.glob("*.json")):
if p.stem == CUSTOM_SCENE_ID:
continue
with open(p, encoding="utf-8") as f:
scenes[p.stem] = json.load(f)
return scenes
def prepare_scene_runtime(scene_name: str, data: dict[str, Any]):
data = copy.deepcopy(data)
scene_config = data.get("SceneConfig", {})
rtc_config = data.get("RTCConfig", {})
voice_chat = data.get("VoiceChat", {})
app_id = rtc_config.get("AppId", "")
assert_scene_value(scene_name, "RTCConfig.AppId", app_id)
token = rtc_config.get("Token", "")
user_id = rtc_config.get("UserId", "")
room_id = rtc_config.get("RoomId", "")
app_key = rtc_config.get("AppKey", "")
if app_id and (not token or not user_id or not room_id):
rtc_config["RoomId"] = voice_chat["RoomId"] = room_id or str(uuid.uuid4())
rtc_config["UserId"] = user_id = user_id or str(uuid.uuid4())
agent_config = voice_chat.get("AgentConfig", {})
target_user_ids = agent_config.get("TargetUserId")
if target_user_ids:
target_user_ids[0] = rtc_config["UserId"]
assert_token_generation_ready(scene_name, app_key)
key = AccessToken(app_id, app_key, rtc_config["RoomId"], rtc_config["UserId"])
key.add_privilege(privileges["PrivSubscribeStream"], 0)
key.add_privilege(privileges["PrivPublishStream"], 0)
key.expire_time(int(time.time()) + 24 * 3600)
rtc_config["Token"] = key.serialize()
return scene_config, rtc_config, voice_chat
def get_custom_llm_callback_settings() -> dict[str, Any]:
missing: list[str] = []
settings = build_llm_settings_from_env(missing)
if missing:
missing_str = ", ".join(dict.fromkeys(missing))
raise ValueError(f"Custom 场景缺少以下环境变量: {missing_str}")
if settings["mode"] != "CustomLLM":
        raise ValueError("当前 CUSTOM_LLM_MODE 不是 CustomLLM,无法使用本地回调接口")
return settings
def ensure_custom_llm_authorized(request: Request, api_key: str):
if not api_key:
return
authorization = request.headers.get("authorization", "")
expected_value = f"Bearer {api_key}"
if authorization == expected_value:
return
raise PermissionError("自定义 LLM 回调鉴权失败,请检查 CUSTOM_LLM_API_KEY")
try:
Scenes = load_scenes()
except ValueError as exc:
raise RuntimeError(f"Custom 场景配置错误:{exc}") from exc
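prepare_scene_runtime 在缺少 RoomId/UserId 时会用 uuid 自动补全、已有值则保留,这一回退逻辑的最小示意如下(配置为假设数据):

```python
import uuid

def fill_rtc_identity(rtc_config: dict) -> dict:
    # 与 prepare_scene_runtime 的回退一致:空值用随机 uuid 补全,已有值保留
    rtc_config = dict(rtc_config)
    rtc_config["RoomId"] = rtc_config.get("RoomId") or str(uuid.uuid4())
    rtc_config["UserId"] = rtc_config.get("UserId") or str(uuid.uuid4())
    return rtc_config

cfg = fill_rtc_identity({"AppId": "demo", "RoomId": "", "UserId": "Huoshan02"})
print(cfg["UserId"])  # Huoshan02
```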

backend/utils/env.py Normal file

@@ -0,0 +1,134 @@
"""
环境变量读取工具函数
"""
import json
import os
from typing import Any
TRUTHY_VALUES = {"1", "true", "yes", "on"}
FALSY_VALUES = {"0", "false", "no", "off"}
def env_str(name: str, default: str = "") -> str:
value = os.getenv(name)
if value is None:
return default
value = value.strip()
return value if value else default
def require_env(name: str, missing: list[str]) -> str:
value = env_str(name)
if not value:
missing.append(name)
return value
def env_bool(name: str, default: bool) -> bool:
raw_value = os.getenv(name)
if raw_value is None or not raw_value.strip():
return default
value = raw_value.strip().lower()
if value in TRUTHY_VALUES:
return True
if value in FALSY_VALUES:
return False
raise ValueError(f"{name} 必须是布尔值,可选 true/false/1/0")
def env_optional_bool(name: str) -> bool | None:
raw_value = os.getenv(name)
if raw_value is None or not raw_value.strip():
return None
return env_bool(name, False)
def env_int(name: str, default: int) -> int:
raw_value = os.getenv(name)
if raw_value is None or not raw_value.strip():
return default
try:
return int(raw_value.strip())
except ValueError as exc:
raise ValueError(f"{name} 必须是整数") from exc
def env_optional_int(name: str) -> int | None:
raw_value = os.getenv(name)
if raw_value is None or not raw_value.strip():
return None
return env_int(name, 0)
def env_float(name: str, default: float) -> float:
raw_value = os.getenv(name)
if raw_value is None or not raw_value.strip():
return default
try:
return float(raw_value.strip())
except ValueError as exc:
raise ValueError(f"{name} 必须是浮点数") from exc
def env_number(name: str, default: int | float) -> int | float:
raw_value = os.getenv(name)
if raw_value is None or not raw_value.strip():
return default
try:
value = float(raw_value.strip())
except ValueError as exc:
raise ValueError(f"{name} 必须是数字") from exc
if value.is_integer():
return int(value)
return value
def env_optional_number(name: str) -> int | float | None:
raw_value = os.getenv(name)
if raw_value is None or not raw_value.strip():
return None
return env_number(name, 0)
def env_json_object(name: str) -> dict[str, str]:
raw_value = os.getenv(name)
if raw_value is None or not raw_value.strip():
return {}
try:
value = json.loads(raw_value)
except json.JSONDecodeError as exc:
raise ValueError(f"{name} 必须是合法的 JSON 对象字符串") from exc
if not isinstance(value, dict):
raise ValueError(f"{name} 必须是 JSON 对象")
invalid_keys = [key for key in value.keys() if not isinstance(key, str)]
if invalid_keys:
raise ValueError(f"{name} 的所有键必须是字符串")
return {key: str(val) for key, val in value.items()}
def env_list(name: str) -> list[str]:
raw_value = os.getenv(name)
if raw_value is None or not raw_value.strip():
return []
return [item.strip() for item in raw_value.split(",") if item.strip()]
def set_if_present(target: dict[str, Any], key: str, value: Any):
if value is None:
return
if isinstance(value, str) and not value:
return
if isinstance(value, dict) and not value:
return
target[key] = value
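env_number 的语义是:整数值返回 int、小数值返回 float,这样写回 TTS 的 speed_ratio 等字段时类型更自然。可以用下面的独立示意验证(环境变量名为假设):

```python
import os

def env_number(name: str, default):
    # 与 utils/env.py 的 env_number 语义一致:整数值返回 int,否则返回 float
    raw = os.getenv(name)
    if raw is None or not raw.strip():
        return default
    value = float(raw.strip())
    return int(value) if value.is_integer() else value

os.environ["DEMO_SPEED_RATIO"] = "1.0"
os.environ["DEMO_VOLUME"] = "0.8"
speed = env_number("DEMO_SPEED_RATIO", 1)
volume = env_number("DEMO_VOLUME", 1)
print(speed, volume)  # 1 0.8
```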

@@ -0,0 +1,33 @@
"""
统一错误响应格式
"""
from fastapi.responses import JSONResponse
def error_response(action: str, message: str):
return JSONResponse(
{
"ResponseMetadata": {
"Action": action,
"Error": {"Code": -1, "Message": message},
}
}
)
def custom_llm_error_response(
message: str,
*,
code: str = "InvalidConfiguration",
status_code: int = 400,
):
return JSONResponse(
{
"Error": {
"Code": code,
"Message": message,
}
},
status_code=status_code,
)

@@ -0,0 +1,31 @@
"""
校验工具函数
"""
from typing import Any
from config.custom_scene import CUSTOM_SCENE_ID
def assert_value(value, msg: str):
if not value:
raise ValueError(msg)
def assert_scene_value(scene_name: str, field_name: str, value: Any):
if value:
return
raise ValueError(f"{scene_name} 场景的 {field_name} 不能为空")
def assert_token_generation_ready(scene_name: str, app_key: str):
if app_key:
return
if scene_name == CUSTOM_SCENE_ID:
raise ValueError(
"Custom 场景未提供 CUSTOM_RTC_TOKEN 时,必须配置 CUSTOM_RTC_APP_KEY 用于自动生成 Token"
)
raise ValueError(f"自动生成 Token 时,{scene_name} 场景的 RTCConfig.AppKey 不可为空")

backend/uv.lock generated

@@ -9,16 +9,20 @@ source = { virtual = "." }
dependencies = [
{ name = "fastapi" },
{ name = "httpx" },
+    { name = "python-dotenv" },
{ name = "python-multipart" },
{ name = "uvicorn", extra = ["standard"] },
+    { name = "volcengine-python-sdk", extra = ["ark"] },
]
[package.metadata]
requires-dist = [
{ name = "fastapi", specifier = ">=0.110.0" },
{ name = "httpx", specifier = ">=0.27.0" },
+    { name = "python-dotenv", specifier = ">=1.2.2" },
{ name = "python-multipart", specifier = ">=0.0.9" },
{ name = "uvicorn", extras = ["standard"], specifier = ">=0.29.0" },
+    { name = "volcengine-python-sdk", extras = ["ark"], specifier = ">=4.0.6" },
]
[package.metadata.requires-dev]
@@ -63,6 +67,51 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/9a/3c/c17fb3ca2d9c3acff52e30b309f538586f9f5b9c9cf454f3845fc9af4881/certifi-2026.2.25-py3-none-any.whl", hash = "sha256:027692e4402ad994f1c42e52a4997a9763c646b73e4096e4d5d6db8af1d6f0fa", size = 153684 },
]
[[package]]
name = "cffi"
version = "2.0.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "pycparser", marker = "implementation_name != 'PyPy'" },
]
sdist = { url = "https://files.pythonhosted.org/packages/eb/56/b1ba7935a17738ae8453301356628e8147c79dbb825bcbc73dc7401f9846/cffi-2.0.0.tar.gz", hash = "sha256:44d1b5909021139fe36001ae048dbdde8214afa20200eda0f64c068cac5d5529", size = 523588 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/4b/8d/a0a47a0c9e413a658623d014e91e74a50cdd2c423f7ccfd44086ef767f90/cffi-2.0.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:00bdf7acc5f795150faa6957054fbbca2439db2f775ce831222b66f192f03beb", size = 185230 },
{ url = "https://files.pythonhosted.org/packages/4a/d2/a6c0296814556c68ee32009d9c2ad4f85f2707cdecfd7727951ec228005d/cffi-2.0.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:45d5e886156860dc35862657e1494b9bae8dfa63bf56796f2fb56e1679fc0bca", size = 181043 },
{ url = "https://files.pythonhosted.org/packages/b0/1e/d22cc63332bd59b06481ceaac49d6c507598642e2230f201649058a7e704/cffi-2.0.0-cp313-cp313-manylinux1_i686.manylinux2014_i686.manylinux_2_17_i686.manylinux_2_5_i686.whl", hash = "sha256:07b271772c100085dd28b74fa0cd81c8fb1a3ba18b21e03d7c27f3436a10606b", size = 212446 },
{ url = "https://files.pythonhosted.org/packages/a9/f5/a2c23eb03b61a0b8747f211eb716446c826ad66818ddc7810cc2cc19b3f2/cffi-2.0.0-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:d48a880098c96020b02d5a1f7d9251308510ce8858940e6fa99ece33f610838b", size = 220101 },
{ url = "https://files.pythonhosted.org/packages/f2/7f/e6647792fc5850d634695bc0e6ab4111ae88e89981d35ac269956605feba/cffi-2.0.0-cp313-cp313-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:f93fd8e5c8c0a4aa1f424d6173f14a892044054871c771f8566e4008eaa359d2", size = 207948 },
{ url = "https://files.pythonhosted.org/packages/cb/1e/a5a1bd6f1fb30f22573f76533de12a00bf274abcdc55c8edab639078abb6/cffi-2.0.0-cp313-cp313-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:dd4f05f54a52fb558f1ba9f528228066954fee3ebe629fc1660d874d040ae5a3", size = 206422 },
{ url = "https://files.pythonhosted.org/packages/98/df/0a1755e750013a2081e863e7cd37e0cdd02664372c754e5560099eb7aa44/cffi-2.0.0-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:c8d3b5532fc71b7a77c09192b4a5a200ea992702734a2e9279a37f2478236f26", size = 219499 },
{ url = "https://files.pythonhosted.org/packages/50/e1/a969e687fcf9ea58e6e2a928ad5e2dd88cc12f6f0ab477e9971f2309b57c/cffi-2.0.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:d9b29c1f0ae438d5ee9acb31cadee00a58c46cc9c0b2f9038c6b0b3470877a8c", size = 222928 },
{ url = "https://files.pythonhosted.org/packages/36/54/0362578dd2c9e557a28ac77698ed67323ed5b9775ca9d3fe73fe191bb5d8/cffi-2.0.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:6d50360be4546678fc1b79ffe7a66265e28667840010348dd69a314145807a1b", size = 221302 },
{ url = "https://files.pythonhosted.org/packages/eb/6d/bf9bda840d5f1dfdbf0feca87fbdb64a918a69bca42cfa0ba7b137c48cb8/cffi-2.0.0-cp313-cp313-win32.whl", hash = "sha256:74a03b9698e198d47562765773b4a8309919089150a0bb17d829ad7b44b60d27", size = 172909 },
{ url = "https://files.pythonhosted.org/packages/37/18/6519e1ee6f5a1e579e04b9ddb6f1676c17368a7aba48299c3759bbc3c8b3/cffi-2.0.0-cp313-cp313-win_amd64.whl", hash = "sha256:19f705ada2530c1167abacb171925dd886168931e0a7b78f5bffcae5c6b5be75", size = 183402 },
{ url = "https://files.pythonhosted.org/packages/cb/0e/02ceeec9a7d6ee63bb596121c2c8e9b3a9e150936f4fbef6ca1943e6137c/cffi-2.0.0-cp313-cp313-win_arm64.whl", hash = "sha256:256f80b80ca3853f90c21b23ee78cd008713787b1b1e93eae9f3d6a7134abd91", size = 177780 },
{ url = "https://files.pythonhosted.org/packages/92/c4/3ce07396253a83250ee98564f8d7e9789fab8e58858f35d07a9a2c78de9f/cffi-2.0.0-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:fc33c5141b55ed366cfaad382df24fe7dcbc686de5be719b207bb248e3053dc5", size = 185320 },
{ url = "https://files.pythonhosted.org/packages/59/dd/27e9fa567a23931c838c6b02d0764611c62290062a6d4e8ff7863daf9730/cffi-2.0.0-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:c654de545946e0db659b3400168c9ad31b5d29593291482c43e3564effbcee13", size = 181487 },
{ url = "https://files.pythonhosted.org/packages/d6/43/0e822876f87ea8a4ef95442c3d766a06a51fc5298823f884ef87aaad168c/cffi-2.0.0-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:24b6f81f1983e6df8db3adc38562c83f7d4a0c36162885ec7f7b77c7dcbec97b", size = 220049 },
{ url = "https://files.pythonhosted.org/packages/b4/89/76799151d9c2d2d1ead63c2429da9ea9d7aac304603de0c6e8764e6e8e70/cffi-2.0.0-cp314-cp314-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:12873ca6cb9b0f0d3a0da705d6086fe911591737a59f28b7936bdfed27c0d47c", size = 207793 },
{ url = "https://files.pythonhosted.org/packages/bb/dd/3465b14bb9e24ee24cb88c9e3730f6de63111fffe513492bf8c808a3547e/cffi-2.0.0-cp314-cp314-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:d9b97165e8aed9272a6bb17c01e3cc5871a594a446ebedc996e2397a1c1ea8ef", size = 206300 },
{ url = "https://files.pythonhosted.org/packages/47/d9/d83e293854571c877a92da46fdec39158f8d7e68da75bf73581225d28e90/cffi-2.0.0-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:afb8db5439b81cf9c9d0c80404b60c3cc9c3add93e114dcae767f1477cb53775", size = 219244 },
{ url = "https://files.pythonhosted.org/packages/2b/0f/1f177e3683aead2bb00f7679a16451d302c436b5cbf2505f0ea8146ef59e/cffi-2.0.0-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:737fe7d37e1a1bffe70bd5754ea763a62a066dc5913ca57e957824b72a85e205", size = 222828 },
{ url = "https://files.pythonhosted.org/packages/c6/0f/cafacebd4b040e3119dcb32fed8bdef8dfe94da653155f9d0b9dc660166e/cffi-2.0.0-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:38100abb9d1b1435bc4cc340bb4489635dc2f0da7456590877030c9b3d40b0c1", size = 220926 },
{ url = "https://files.pythonhosted.org/packages/3e/aa/df335faa45b395396fcbc03de2dfcab242cd61a9900e914fe682a59170b1/cffi-2.0.0-cp314-cp314-win32.whl", hash = "sha256:087067fa8953339c723661eda6b54bc98c5625757ea62e95eb4898ad5e776e9f", size = 175328 },
{ url = "https://files.pythonhosted.org/packages/bb/92/882c2d30831744296ce713f0feb4c1cd30f346ef747b530b5318715cc367/cffi-2.0.0-cp314-cp314-win_amd64.whl", hash = "sha256:203a48d1fb583fc7d78a4c6655692963b860a417c0528492a6bc21f1aaefab25", size = 185650 },
{ url = "https://files.pythonhosted.org/packages/9f/2c/98ece204b9d35a7366b5b2c6539c350313ca13932143e79dc133ba757104/cffi-2.0.0-cp314-cp314-win_arm64.whl", hash = "sha256:dbd5c7a25a7cb98f5ca55d258b103a2054f859a46ae11aaf23134f9cc0d356ad", size = 180687 },
{ url = "https://files.pythonhosted.org/packages/3e/61/c768e4d548bfa607abcda77423448df8c471f25dbe64fb2ef6d555eae006/cffi-2.0.0-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:9a67fc9e8eb39039280526379fb3a70023d77caec1852002b4da7e8b270c4dd9", size = 188773 },
{ url = "https://files.pythonhosted.org/packages/2c/ea/5f76bce7cf6fcd0ab1a1058b5af899bfbef198bea4d5686da88471ea0336/cffi-2.0.0-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:7a66c7204d8869299919db4d5069a82f1561581af12b11b3c9f48c584eb8743d", size = 185013 },
{ url = "https://files.pythonhosted.org/packages/be/b4/c56878d0d1755cf9caa54ba71e5d049479c52f9e4afc230f06822162ab2f/cffi-2.0.0-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:7cc09976e8b56f8cebd752f7113ad07752461f48a58cbba644139015ac24954c", size = 221593 },
{ url = "https://files.pythonhosted.org/packages/e0/0d/eb704606dfe8033e7128df5e90fee946bbcb64a04fcdaa97321309004000/cffi-2.0.0-cp314-cp314t-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:92b68146a71df78564e4ef48af17551a5ddd142e5190cdf2c5624d0c3ff5b2e8", size = 209354 },
{ url = "https://files.pythonhosted.org/packages/d8/19/3c435d727b368ca475fb8742ab97c9cb13a0de600ce86f62eab7fa3eea60/cffi-2.0.0-cp314-cp314t-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:b1e74d11748e7e98e2f426ab176d4ed720a64412b6a15054378afdb71e0f37dc", size = 208480 },
{ url = "https://files.pythonhosted.org/packages/d0/44/681604464ed9541673e486521497406fadcc15b5217c3e326b061696899a/cffi-2.0.0-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:28a3a209b96630bca57cce802da70c266eb08c6e97e5afd61a75611ee6c64592", size = 221584 },
{ url = "https://files.pythonhosted.org/packages/25/8e/342a504ff018a2825d395d44d63a767dd8ebc927ebda557fecdaca3ac33a/cffi-2.0.0-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:7553fb2090d71822f02c629afe6042c299edf91ba1bf94951165613553984512", size = 224443 },
{ url = "https://files.pythonhosted.org/packages/e1/5e/b666bacbbc60fbf415ba9988324a132c9a7a0448a9a8f125074671c0f2c3/cffi-2.0.0-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:6c6c373cfc5c83a975506110d17457138c8c63016b563cc9ed6e056a82f13ce4", size = 223437 },
{ url = "https://files.pythonhosted.org/packages/a0/1d/ec1a60bd1a10daa292d3cd6bb0b359a81607154fb8165f3ec95fe003b85c/cffi-2.0.0-cp314-cp314t-win32.whl", hash = "sha256:1fc9ea04857caf665289b7a75923f2c6ed559b8298a1b8c49e59f7dd95c8481e", size = 180487 },
{ url = "https://files.pythonhosted.org/packages/bf/41/4c1168c74fac325c0c8156f04b6749c8b6a8f405bbf91413ba088359f60d/cffi-2.0.0-cp314-cp314t-win_amd64.whl", hash = "sha256:d68b6cef7827e8641e8ef16f4494edda8b36104d79773a334beaa1e3521430f6", size = 191726 },
{ url = "https://files.pythonhosted.org/packages/ae/3a/dbeec9d1ee0844c679f6bb5d6ad4e9f198b1224f4e7a32825f47f6192b0c/cffi-2.0.0-cp314-cp314t-win_arm64.whl", hash = "sha256:0a1527a803f0a659de1af2e1fd700213caba79377e27e4693648c2923da066f9", size = 184195 },
]
[[package]]
name = "click"
version = "8.3.1"
@@ -84,6 +133,59 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/d1/d6/3965ed04c63042e047cb6a3e6ed1a63a35087b6a609aa3a15ed8ac56c221/colorama-0.4.6-py2.py3-none-any.whl", hash = "sha256:4f1d9991f5acc0ca119f9d443620b77f9d6b33703e51011c16baf57afb285fc6", size = 25335 },
]
[[package]]
name = "cryptography"
version = "46.0.6"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "cffi", marker = "platform_python_implementation != 'PyPy'" },
]
sdist = { url = "https://files.pythonhosted.org/packages/a4/ba/04b1bd4218cbc58dc90ce967106d51582371b898690f3ae0402876cc4f34/cryptography-46.0.6.tar.gz", hash = "sha256:27550628a518c5c6c903d84f637fbecf287f6cb9ced3804838a1295dc1fd0759", size = 750542 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/47/23/9285e15e3bc57325b0a72e592921983a701efc1ee8f91c06c5f0235d86d9/cryptography-46.0.6-cp311-abi3-macosx_10_9_universal2.whl", hash = "sha256:64235194bad039a10bb6d2d930ab3323baaec67e2ce36215fd0952fad0930ca8", size = 7176401 },
{ url = "https://files.pythonhosted.org/packages/60/f8/e61f8f13950ab6195b31913b42d39f0f9afc7d93f76710f299b5ec286ae6/cryptography-46.0.6-cp311-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:26031f1e5ca62fcb9d1fcb34b2b60b390d1aacaa15dc8b895a9ed00968b97b30", size = 4275275 },
{ url = "https://files.pythonhosted.org/packages/19/69/732a736d12c2631e140be2348b4ad3d226302df63ef64d30dfdb8db7ad1c/cryptography-46.0.6-cp311-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:9a693028b9cbe51b5a1136232ee8f2bc242e4e19d456ded3fa7c86e43c713b4a", size = 4425320 },
{ url = "https://files.pythonhosted.org/packages/d4/12/123be7292674abf76b21ac1fc0e1af50661f0e5b8f0ec8285faac18eb99e/cryptography-46.0.6-cp311-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:67177e8a9f421aa2d3a170c3e56eca4e0128883cf52a071a7cbf53297f18b175", size = 4278082 },
{ url = "https://files.pythonhosted.org/packages/5b/ba/d5e27f8d68c24951b0a484924a84c7cdaed7502bac9f18601cd357f8b1d2/cryptography-46.0.6-cp311-abi3-manylinux_2_28_ppc64le.whl", hash = "sha256:d9528b535a6c4f8ff37847144b8986a9a143585f0540fbcb1a98115b543aa463", size = 4926514 },
{ url = "https://files.pythonhosted.org/packages/34/71/1ea5a7352ae516d5512d17babe7e1b87d9db5150b21f794b1377eac1edc0/cryptography-46.0.6-cp311-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:22259338084d6ae497a19bae5d4c66b7ca1387d3264d1c2c0e72d9e9b6a77b97", size = 4457766 },
{ url = "https://files.pythonhosted.org/packages/01/59/562be1e653accee4fdad92c7a2e88fced26b3fdfce144047519bbebc299e/cryptography-46.0.6-cp311-abi3-manylinux_2_31_armv7l.whl", hash = "sha256:760997a4b950ff00d418398ad73fbc91aa2894b5c1db7ccb45b4f68b42a63b3c", size = 3986535 },
{ url = "https://files.pythonhosted.org/packages/d6/8b/b1ebfeb788bf4624d36e45ed2662b8bd43a05ff62157093c1539c1288a18/cryptography-46.0.6-cp311-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:3dfa6567f2e9e4c5dceb8ccb5a708158a2a871052fa75c8b78cb0977063f1507", size = 4277618 },
{ url = "https://files.pythonhosted.org/packages/dd/52/a005f8eabdb28df57c20f84c44d397a755782d6ff6d455f05baa2785bd91/cryptography-46.0.6-cp311-abi3-manylinux_2_34_ppc64le.whl", hash = "sha256:cdcd3edcbc5d55757e5f5f3d330dd00007ae463a7e7aa5bf132d1f22a4b62b19", size = 4890802 },
{ url = "https://files.pythonhosted.org/packages/ec/4d/8e7d7245c79c617d08724e2efa397737715ca0ec830ecb3c91e547302555/cryptography-46.0.6-cp311-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:d4e4aadb7fc1f88687f47ca20bb7227981b03afaae69287029da08096853b738", size = 4457425 },
{ url = "https://files.pythonhosted.org/packages/1d/5c/f6c3596a1430cec6f949085f0e1a970638d76f81c3ea56d93d564d04c340/cryptography-46.0.6-cp311-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:2b417edbe8877cda9022dde3a008e2deb50be9c407eef034aeeb3a8b11d9db3c", size = 4405530 },
{ url = "https://files.pythonhosted.org/packages/7e/c9/9f9cea13ee2dbde070424e0c4f621c091a91ffcc504ffea5e74f0e1daeff/cryptography-46.0.6-cp311-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:380343e0653b1c9d7e1f55b52aaa2dbb2fdf2730088d48c43ca1c7c0abb7cc2f", size = 4667896 },
{ url = "https://files.pythonhosted.org/packages/ad/b5/1895bc0821226f129bc74d00eccfc6a5969e2028f8617c09790bf89c185e/cryptography-46.0.6-cp311-abi3-win32.whl", hash = "sha256:bcb87663e1f7b075e48c3be3ecb5f0b46c8fc50b50a97cf264e7f60242dca3f2", size = 3026348 },
{ url = "https://files.pythonhosted.org/packages/c3/f8/c9bcbf0d3e6ad288b9d9aa0b1dee04b063d19e8c4f871855a03ab3a297ab/cryptography-46.0.6-cp311-abi3-win_amd64.whl", hash = "sha256:6739d56300662c468fddb0e5e291f9b4d084bead381667b9e654c7dd81705124", size = 3483896 },
{ url = "https://files.pythonhosted.org/packages/01/41/3a578f7fd5c70611c0aacba52cd13cb364a5dee895a5c1d467208a9380b0/cryptography-46.0.6-cp314-cp314t-macosx_10_9_universal2.whl", hash = "sha256:2ef9e69886cbb137c2aef9772c2e7138dc581fad4fcbcf13cc181eb5a3ab6275", size = 7117147 },
{ url = "https://files.pythonhosted.org/packages/fa/87/887f35a6fca9dde90cad08e0de0c89263a8e59b2d2ff904fd9fcd8025b6f/cryptography-46.0.6-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:7f417f034f91dcec1cb6c5c35b07cdbb2ef262557f701b4ecd803ee8cefed4f4", size = 4266221 },
{ url = "https://files.pythonhosted.org/packages/aa/a8/0a90c4f0b0871e0e3d1ed126aed101328a8a57fd9fd17f00fb67e82a51ca/cryptography-46.0.6-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:d24c13369e856b94892a89ddf70b332e0b70ad4a5c43cf3e9cb71d6d7ffa1f7b", size = 4408952 },
{ url = "https://files.pythonhosted.org/packages/16/0b/b239701eb946523e4e9f329336e4ff32b1247e109cbab32d1a7b61da8ed7/cryptography-46.0.6-cp314-cp314t-manylinux_2_28_aarch64.whl", hash = "sha256:aad75154a7ac9039936d50cf431719a2f8d4ed3d3c277ac03f3339ded1a5e707", size = 4270141 },
{ url = "https://files.pythonhosted.org/packages/0f/a8/976acdd4f0f30df7b25605f4b9d3d89295351665c2091d18224f7ad5cdbf/cryptography-46.0.6-cp314-cp314t-manylinux_2_28_ppc64le.whl", hash = "sha256:3c21d92ed15e9cfc6eb64c1f5a0326db22ca9c2566ca46d845119b45b4400361", size = 4904178 },
{ url = "https://files.pythonhosted.org/packages/b1/1b/bf0e01a88efd0e59679b69f42d4afd5bced8700bb5e80617b2d63a3741af/cryptography-46.0.6-cp314-cp314t-manylinux_2_28_x86_64.whl", hash = "sha256:4668298aef7cddeaf5c6ecc244c2302a2b8e40f384255505c22875eebb47888b", size = 4441812 },
{ url = "https://files.pythonhosted.org/packages/bb/8b/11df86de2ea389c65aa1806f331cae145f2ed18011f30234cc10ca253de8/cryptography-46.0.6-cp314-cp314t-manylinux_2_31_armv7l.whl", hash = "sha256:8ce35b77aaf02f3b59c90b2c8a05c73bac12cea5b4e8f3fbece1f5fddea5f0ca", size = 3963923 },
{ url = "https://files.pythonhosted.org/packages/91/e0/207fb177c3a9ef6a8108f234208c3e9e76a6aa8cf20d51932916bd43bda0/cryptography-46.0.6-cp314-cp314t-manylinux_2_34_aarch64.whl", hash = "sha256:c89eb37fae9216985d8734c1afd172ba4927f5a05cfd9bf0e4863c6d5465b013", size = 4269695 },
{ url = "https://files.pythonhosted.org/packages/21/5e/19f3260ed1e95bced52ace7501fabcd266df67077eeb382b79c81729d2d3/cryptography-46.0.6-cp314-cp314t-manylinux_2_34_ppc64le.whl", hash = "sha256:ed418c37d095aeddf5336898a132fba01091f0ac5844e3e8018506f014b6d2c4", size = 4869785 },
{ url = "https://files.pythonhosted.org/packages/10/38/cd7864d79aa1d92ef6f1a584281433419b955ad5a5ba8d1eb6c872165bcb/cryptography-46.0.6-cp314-cp314t-manylinux_2_34_x86_64.whl", hash = "sha256:69cf0056d6947edc6e6760e5f17afe4bea06b56a9ac8a06de9d2bd6b532d4f3a", size = 4441404 },
{ url = "https://files.pythonhosted.org/packages/09/0a/4fe7a8d25fed74419f91835cf5829ade6408fd1963c9eae9c4bce390ecbb/cryptography-46.0.6-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:8e7304c4f4e9490e11efe56af6713983460ee0780f16c63f219984dab3af9d2d", size = 4397549 },
{ url = "https://files.pythonhosted.org/packages/5f/a0/7d738944eac6513cd60a8da98b65951f4a3b279b93479a7e8926d9cd730b/cryptography-46.0.6-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:b928a3ca837c77a10e81a814a693f2295200adb3352395fad024559b7be7a736", size = 4651874 },
{ url = "https://files.pythonhosted.org/packages/cb/f1/c2326781ca05208845efca38bf714f76939ae446cd492d7613808badedf1/cryptography-46.0.6-cp314-cp314t-win32.whl", hash = "sha256:97c8115b27e19e592a05c45d0dd89c57f81f841cc9880e353e0d3bf25b2139ed", size = 3001511 },
{ url = "https://files.pythonhosted.org/packages/c9/57/fe4a23eb549ac9d903bd4698ffda13383808ef0876cc912bcb2838799ece/cryptography-46.0.6-cp314-cp314t-win_amd64.whl", hash = "sha256:c797e2517cb7880f8297e2c0f43bb910e91381339336f75d2c1c2cbf811b70b4", size = 3471692 },
{ url = "https://files.pythonhosted.org/packages/c4/cc/f330e982852403da79008552de9906804568ae9230da8432f7496ce02b71/cryptography-46.0.6-cp38-abi3-macosx_10_9_universal2.whl", hash = "sha256:12cae594e9473bca1a7aceb90536060643128bb274fcea0fc459ab90f7d1ae7a", size = 7162776 },
{ url = "https://files.pythonhosted.org/packages/49/b3/dc27efd8dcc4bff583b3f01d4a3943cd8b5821777a58b3a6a5f054d61b79/cryptography-46.0.6-cp38-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:639301950939d844a9e1c4464d7e07f902fe9a7f6b215bb0d4f28584729935d8", size = 4270529 },
{ url = "https://files.pythonhosted.org/packages/e6/05/e8d0e6eb4f0d83365b3cb0e00eb3c484f7348db0266652ccd84632a3d58d/cryptography-46.0.6-cp38-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:ed3775295fb91f70b4027aeba878d79b3e55c0b3e97eaa4de71f8f23a9f2eb77", size = 4414827 },
{ url = "https://files.pythonhosted.org/packages/2f/97/daba0f5d2dc6d855e2dcb70733c812558a7977a55dd4a6722756628c44d1/cryptography-46.0.6-cp38-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:8927ccfbe967c7df312ade694f987e7e9e22b2425976ddbf28271d7e58845290", size = 4271265 },
{ url = "https://files.pythonhosted.org/packages/89/06/fe1fce39a37ac452e58d04b43b0855261dac320a2ebf8f5260dd55b201a9/cryptography-46.0.6-cp38-abi3-manylinux_2_28_ppc64le.whl", hash = "sha256:b12c6b1e1651e42ab5de8b1e00dc3b6354fdfd778e7fa60541ddacc27cd21410", size = 4916800 },
{ url = "https://files.pythonhosted.org/packages/ff/8a/b14f3101fe9c3592603339eb5d94046c3ce5f7fc76d6512a2d40efd9724e/cryptography-46.0.6-cp38-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:063b67749f338ca9c5a0b7fe438a52c25f9526b851e24e6c9310e7195aad3b4d", size = 4448771 },
{ url = "https://files.pythonhosted.org/packages/01/b3/0796998056a66d1973fd52ee89dc1bb3b6581960a91ad4ac705f182d398f/cryptography-46.0.6-cp38-abi3-manylinux_2_31_armv7l.whl", hash = "sha256:02fad249cb0e090b574e30b276a3da6a149e04ee2f049725b1f69e7b8351ec70", size = 3978333 },
{ url = "https://files.pythonhosted.org/packages/c5/3d/db200af5a4ffd08918cd55c08399dc6c9c50b0bc72c00a3246e099d3a849/cryptography-46.0.6-cp38-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:7e6142674f2a9291463e5e150090b95a8519b2fb6e6aaec8917dd8d094ce750d", size = 4271069 },
{ url = "https://files.pythonhosted.org/packages/d7/18/61acfd5b414309d74ee838be321c636fe71815436f53c9f0334bf19064fa/cryptography-46.0.6-cp38-abi3-manylinux_2_34_ppc64le.whl", hash = "sha256:456b3215172aeefb9284550b162801d62f5f264a081049a3e94307fe20792cfa", size = 4878358 },
{ url = "https://files.pythonhosted.org/packages/8b/65/5bf43286d566f8171917cae23ac6add941654ccf085d739195a4eacf1674/cryptography-46.0.6-cp38-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:341359d6c9e68834e204ceaf25936dffeafea3829ab80e9503860dcc4f4dac58", size = 4448061 },
{ url = "https://files.pythonhosted.org/packages/e0/25/7e49c0fa7205cf3597e525d156a6bce5b5c9de1fd7e8cb01120e459f205a/cryptography-46.0.6-cp38-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:9a9c42a2723999a710445bc0d974e345c32adfd8d2fac6d8a251fa829ad31cfb", size = 4399103 },
{ url = "https://files.pythonhosted.org/packages/44/46/466269e833f1c4718d6cd496ffe20c56c9c8d013486ff66b4f69c302a68d/cryptography-46.0.6-cp38-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:6617f67b1606dfd9fe4dbfa354a9508d4a6d37afe30306fe6c101b7ce3274b72", size = 4659255 },
{ url = "https://files.pythonhosted.org/packages/0a/09/ddc5f630cc32287d2c953fc5d32705e63ec73e37308e5120955316f53827/cryptography-46.0.6-cp38-abi3-win32.whl", hash = "sha256:7f6690b6c55e9c5332c0b59b9c8a3fb232ebf059094c17f9019a51e9827df91c", size = 3010660 },
{ url = "https://files.pythonhosted.org/packages/1b/82/ca4893968aeb2709aacfb57a30dec6fa2ab25b10fa9f064b8882ce33f599/cryptography-46.0.6-cp38-abi3-win_amd64.whl", hash = "sha256:79e865c642cfc5c0b3eb12af83c35c5aeff4fa5c672dc28c43721c2c9fdd2f0f", size = 3471160 },
]
[[package]]
name = "fastapi"
version = "0.135.2"
@@ -168,6 +270,15 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/0e/61/66938bbb5fc52dbdf84594873d5b51fb1f7c7794e9c0f5bd885f30bc507b/idna-3.11-py3-none-any.whl", hash = "sha256:771a87f49d9defaf64091e6e6fe9c18d4833f140bd19464795bc32d966ca37ea", size = 71008 },
]
[[package]]
name = "pycparser"
version = "3.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/1b/7d/92392ff7815c21062bea51aa7b87d45576f649f16458d78b7cf94b9ab2e6/pycparser-3.0.tar.gz", hash = "sha256:600f49d217304a5902ac3c37e1281c9fe94e4d0489de643a9504c5cdfdfc6b29", size = 103492 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/0c/c3/44f3fbbfa403ea2a7c779186dc20772604442dde72947e7d01069cbe98e3/pycparser-3.0-py3-none-any.whl", hash = "sha256:b727414169a36b7d524c1c3e31839a521725078d7b2ff038656844266160a992", size = 48172 },
]
[[package]]
name = "pydantic"
version = "2.12.5"
@@ -236,6 +347,18 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/9f/ed/068e41660b832bb0b1aa5b58011dea2a3fe0ba7861ff38c4d4904c1c1a99/pydantic_core-2.41.5-cp314-cp314t-win_arm64.whl", hash = "sha256:35b44f37a3199f771c3eaa53051bc8a70cd7b54f333531c59e29fd4db5d15008", size = 1974769 },
]
[[package]]
name = "python-dateutil"
version = "2.9.0.post0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "six" },
]
sdist = { url = "https://files.pythonhosted.org/packages/66/c0/0c8b6ad9f17a802ee498c46e004a0eb49bc148f2fd230864601a86dcf6db/python-dateutil-2.9.0.post0.tar.gz", hash = "sha256:37dd54208da7e1cd875388217d5e00ebd4179249f90fb72437e91a35459a0ad3", size = 342432 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/ec/57/56b9bcc3c9c6a792fcbaf139543cee77261f3651ca9da0c93f5c1221264b/python_dateutil-2.9.0.post0-py2.py3-none-any.whl", hash = "sha256:a8b2bc7bffae282281c8140a97d3aa9c14da0b136dfe83f850eea9a5f7470427", size = 229892 },
]
[[package]]
name = "python-dotenv"
version = "1.2.2"
@@ -290,6 +413,15 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/f1/12/de94a39c2ef588c7e6455cfbe7343d3b2dc9d6b6b2f40c4c6565744c873d/pyyaml-6.0.3-cp314-cp314t-win_arm64.whl", hash = "sha256:ebc55a14a21cb14062aa4162f906cd962b28e2e9ea38f9b4391244cd8de4ae0b", size = 149341 },
]
[[package]]
name = "six"
version = "1.17.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/94/e7/b2c673351809dca68a0e064b6af791aa332cf192da575fd474ed7d6f16a2/six-1.17.0.tar.gz", hash = "sha256:ff70335d468e7eb6ec65b95b99d3a2836546063f63acc5171de367e834932a81", size = 34031 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/b7/ce/149a00dd41f10bc29e5921b496af8b574d8413afcd5e30dfa0ed46c2cc5e/six-1.17.0-py2.py3-none-any.whl", hash = "sha256:4721f391ed90541fddacab5acf947aa0d3dc7d27b2e1e8eda2be8970586c3274", size = 11050 },
]
[[package]]
name = "starlette"
version = "1.0.0"
@@ -323,6 +455,15 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/dc/9b/47798a6c91d8bdb567fe2698fe81e0c6b7cb7ef4d13da4114b41d239f65d/typing_inspection-0.4.2-py3-none-any.whl", hash = "sha256:4ed1cacbdc298c220f1bd249ed5287caa16f34d44ef4e9c3d0cbad5b521545e7", size = 14611 },
]
[[package]]
name = "urllib3"
version = "2.6.3"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/c7/24/5f1b3bdffd70275f6661c76461e25f024d5a38a46f04aaca912426a2b1d3/urllib3-2.6.3.tar.gz", hash = "sha256:1b62b6884944a57dbe321509ab94fd4d3b307075e0c2eae991ac71ee15ad38ed", size = 435556 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/39/08/aaaad47bc4e9dc8c725e68f9d04865dbcb2052843ff09c97b08904852d84/urllib3-2.6.3-py3-none-any.whl", hash = "sha256:bf272323e553dfb2e87d9bfd225ca7b0f467b919d7bbd355436d3fd37cb0acd4", size = 131584 },
]
[[package]]
name = "uvicorn"
version = "0.42.0"
@@ -373,6 +514,29 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/e4/16/c1fd27e9549f3c4baf1dc9c20c456cd2f822dbf8de9f463824b0c0357e06/uvloop-0.22.1-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:6cde23eeda1a25c75b2e07d39970f3374105d5eafbaab2a4482be82f272d5a5e", size = 4296730 },
]
[[package]]
name = "volcengine-python-sdk"
version = "5.0.21"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "certifi" },
{ name = "python-dateutil" },
{ name = "six" },
{ name = "urllib3" },
]
sdist = { url = "https://files.pythonhosted.org/packages/0a/5e/1d1c1ffc27cd552df55bd75e43b5160d5a5224c2459f8cafe04982921340/volcengine_python_sdk-5.0.21.tar.gz", hash = "sha256:324eded08082fcc65c55c304aca62f5bf1bc5dc472d8ed0bb9b50bdffb768a9b", size = 8283723 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/41/0e/4fdb25614ba83c42135b5eeb5d182ac22493bd7c221dd3477090c3c15897/volcengine_python_sdk-5.0.21-py2.py3-none-any.whl", hash = "sha256:a478bdf3036d8b2e42c19b04c9b708018316a4daac8d8cd58b882074e9d03546", size = 32571307 },
]
[package.optional-dependencies]
ark = [
{ name = "anyio" },
{ name = "cryptography" },
{ name = "httpx" },
{ name = "pydantic" },
]
[[package]]
name = "watchfiles"
version = "1.1.1"