fix: AstrBot hits a 400 BadRequestError (context too long) when calling LLM models on the PPIO platform, so after all model fallbacks fail it ends up in AgentState.ERROR. #7888

Open

leonforcode wants to merge 5 commits into AstrBotDevs:master from leonforcode:master

Conversation

@leonforcode
Contributor

@leonforcode leonforcode commented Apr 29, 2026

Fix context-length error detection failing for the PPIO platform

When AstrBot calls LLM models on the PPIO platform, it gets a 400 BadRequestError ("The input is longer than the model's context length"), and after all model fallbacks fail it enters the AgentState.ERROR state. The cause is that the context-length detection logic in the _handle_api_error method only matches "maximum context length" and cannot recognize PPIO's error message format.

Modifications

Modified file:

  • astrbot/core/provider/sources/openai_source.py (line 1072)

Changes:

  • Extend the context-length error detection to also match "context length" (case-insensitive)
  • Keep the original "maximum context length" match for backward compatibility (OpenAI, Azure, etc.)
  • Use .lower() so the match is case-insensitive, improving robustness

Before:

if "maximum context length" in str(e):

After:

if "maximum context length" in str(e) or "context length" in str(e).lower():
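To sanity-check the broadened condition, here is a quick standalone test. The PPIO message is taken from the PR description; the OpenAI-style message is an assumed example of what the original check matched:

```python
def is_context_length_error(e: Exception) -> bool:
    # Broadened check from this PR: keep the original exact match and
    # add a case-insensitive substring match for "context length".
    return (
        "maximum context length" in str(e)
        or "context length" in str(e).lower()
    )


# OpenAI-style message (the original check already matched this)
openai_err = Exception("This model's maximum context length is 8192 tokens.")
# PPIO-style message (only the new check matches this)
ppio_err = Exception("The input is longer than the model's context length")
# An unrelated error should not match
other_err = Exception("Rate limit exceeded")

print(is_context_length_error(openai_err))  # True
print(is_context_length_error(ppio_err))    # True
print(is_context_length_error(other_err))   # False
```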

Scope of impact:

  • Fixes context-length overflow handling for PPIO platform models (e.g. ppio/zai-org/glm-5-turbo, ppio/minimax/minimax-m2.7)

  • Does not affect existing behavior on other platforms

  • This is NOT a breaking change.

Screenshots or Test Results

After the fix, when the PPIO platform returns the error The input is longer than the model's context length:

  1. It is correctly recognized as a context-length overflow error
  2. self.pop_record(context_query) is called to drop the earliest record
  3. The request is retried instead of the exception being raised immediately
  4. Cascading fallback failures are avoided
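The recovery flow above can be sketched roughly as follows. This is a minimal illustration, not AstrBot's actual _handle_api_error implementation; the class name, the send_fn callable, and the simplified pop_record are all hypothetical stand-ins:

```python
class ContextOverflowRetrier:
    """Sketch of retry-on-context-overflow: drop the oldest history
    record and retry until the request fits or history runs out."""

    def __init__(self, history: list, send_fn):
        self.history = history
        self.send_fn = send_fn  # callable(history) -> response, may raise

    @staticmethod
    def _is_context_length_error(e: Exception) -> bool:
        return "context length" in str(e).lower()

    def pop_record(self):
        # Drop the earliest conversation record to free context space.
        if self.history:
            self.history.pop(0)

    def request(self):
        while True:
            try:
                return self.send_fn(self.history)
            except Exception as e:
                if self._is_context_length_error(e) and self.history:
                    self.pop_record()  # shrink the context and retry
                    continue
                raise  # unrelated error, or nothing left to drop


# Simulated backend that overflows until history fits in 2 records.
def fake_send(history):
    if len(history) > 2:
        raise Exception("The input is longer than the model's context length")
    return "ok"


retrier = ContextOverflowRetrier(["a", "b", "c", "d"], fake_send)
print(retrier.request())  # "ok" after dropping two records
```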

Summary by Sourcery

Bug Fixes:

  • Handle PPIO platform context-length error messages by broadening the matching logic for context length errors so requests can be retried instead of causing agent errors.

leonforcode and others added 5 commits March 13, 2026 18:08
…bility

- Extend error detection to handle PPIO's error message format:
  'The input is longer than the model's context length'
- Add case-insensitive matching using .lower() for robustness
- Maintain backward compatibility with existing 'maximum context length' check

This fixes the issue where PPIO platform models (e.g., ppio/zai-org/glm-5-turbo)
would fail with AgentState.ERROR due to unrecognized context length errors.
@auto-assign auto-assign Bot requested review from Raven95676 and anka-afk April 29, 2026 07:15
@dosubot dosubot Bot added size:XS This PR changes 0-9 lines, ignoring generated files. area:provider The bug / feature is about AI Provider, Models, LLM Agent, LLM Agent Runner. labels Apr 29, 2026
Contributor

@sourcery-ai sourcery-ai Bot left a comment


Hey - I've left some high level feedback:

  • Consider normalizing the exception message once (e.g., msg = str(e).lower()) and reusing it in the condition to avoid calling str(e) twice and to keep the logic clearer.
  • The new "context length" substring is quite generic; if possible, narrow the match (for example by anchoring to a phrase like "model's context length") to reduce the risk of misclassifying unrelated errors that mention context length.
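Applying both of Sourcery's suggestions together might look like this (a sketch; the function name and the anchored phrase are illustrative, not code from the PR):

```python
def is_context_length_error(e: Exception) -> bool:
    # Normalize once instead of calling str(e) twice.
    msg = str(e).lower()
    # Anchor to more specific phrases to reduce the chance of
    # misclassifying unrelated errors that merely mention the words.
    return (
        "maximum context length" in msg
        or "model's context length" in msg
    )


print(is_context_length_error(
    Exception("The input is longer than the model's context length")))  # True
print(is_context_length_error(
    Exception("context length logging enabled")))  # False
```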

Contributor

@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request updates the error handling logic in the OpenAI source provider to catch context length errors more effectively by adding a case-insensitive check for the string 'context length'. A review comment points out that the new case-insensitive check makes the original specific string check redundant and suggests simplifying the condition for better readability and consistency.

  )
  raise e
- if "maximum context length" in str(e):
+ if "maximum context length" in str(e) or "context length" in str(e).lower():
Contributor


medium

This logic is redundant: "context length" in str(e).lower() already fully covers "maximum context length" in str(e) (including all case variants). Consider simplifying to a single lowercase match. This remains compatible with the original error message, makes the code more concise, and is consistent with how the rest of this function handles other errors (e.g. lines 1127-1129).

Suggested change
if "maximum context length" in str(e) or "context length" in str(e).lower():
if "context length" in str(e).lower():


Labels

area:provider The bug / feature is about AI Provider, Models, LLM Agent, LLM Agent Runner. size:XS This PR changes 0-9 lines, ignoring generated files.

Projects

None yet

Development

Successfully merging this pull request may close these issues.

2 participants