
feat(provider): Add DeepSeek V4 thinking mode toggle and reasoning effort configuration #7860

Open
piexian wants to merge 2 commits into AstrBotDevs:master from piexian:feat/deepseek-thinking-mode

Conversation

Contributor

@piexian piexian commented Apr 28, 2026

Modifications / 改动点

Added the deepseek_thinking_enabled and deepseek_reasoning_effort config options; the provider automatically injects thinking into extra_body and sets reasoning_effort.

  • This is NOT a breaking change.
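As a rough sketch of the behavior described above (the function name and the exact extra_body shape here are illustrative assumptions, not the PR's actual code), the override could look like:

```python
# Hypothetical sketch of the DeepSeek override described above.
# apply_deepseek_overrides and the {"type": ...} payload shape are assumptions,
# not the PR's actual implementation.
def apply_deepseek_overrides(
    payloads: dict, thinking_enabled: bool, reasoning_effort: str
) -> dict:
    extra_body = payloads.setdefault("extra_body", {})
    if thinking_enabled:
        extra_body["thinking"] = {"type": "enabled"}
        payloads["reasoning_effort"] = reasoning_effort
    else:
        extra_body["thinking"] = {"type": "disabled"}
        # Drop the conflicting field when thinking is off
        payloads.pop("reasoning_effort", None)
    return payloads
```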

Screenshots or Test Results / 运行截图或测试结果

(screenshot attached)

Checklist / 检查清单

  • 😊 If there are new features added in the PR, I have discussed it with the authors through issues/emails, etc.

  • 👀 My changes have been well-tested, and "Verification Steps" and "Screenshots" have been provided above.

  • 🤓 I have ensured that no new dependencies are introduced, OR if new dependencies are introduced, they have been added to the appropriate locations in requirements.txt and pyproject.toml.

  • 😮 My changes do not introduce malicious code.

Summary by Sourcery

Add configurable DeepSeek thinking mode and reasoning effort handling across provider, config, and dashboard, ensuring correct request payload shaping for DeepSeek models.

New Features:

  • Introduce provider-level flags to enable or disable DeepSeek thinking mode and to select reasoning effort level for DeepSeek requests.
  • Expose DeepSeek thinking mode and reasoning effort options in the default provider configuration and dashboard UI with sensible defaults.

Enhancements:

  • Normalize boolean-style config flags via a shared helper and centralize provider-specific request overrides for DeepSeek, including cleanup of conflicting extra body fields.
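The shared bool-flag helper could be sketched as follows; the accepted string tokens are an assumption, since the actual `_config_flag_enabled` implementation is not shown in this thread:

```python
def config_flag_enabled(value, default: bool = False) -> bool:
    # Normalize bool-like config values (True, 1, "true", "on", ...) to a bool.
    # The exact token set is an assumption about the shared helper.
    if isinstance(value, bool):
        return value
    if isinstance(value, (int, float)):
        return bool(value)
    if isinstance(value, str):
        return value.strip().lower() in {"1", "true", "yes", "on", "enabled"}
    return default
```

Such a helper keeps "true"/"True"/1/True behaving identically across providers, which is the consistency point raised in the review below.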

Tests:

  • Add tests verifying DeepSeek thinking mode disabling logic and automatic injection of thinking and reasoning_effort into DeepSeek query payloads.

Copilot AI review requested due to automatic review settings April 28, 2026 04:42
@dosubot dosubot Bot added size:M This PR changes 30-99 lines, ignoring generated files. area:provider The bug / feature is about AI Provider, Models, LLM Agent, LLM Agent Runner. labels Apr 28, 2026
Contributor

@sourcery-ai sourcery-ai Bot left a comment


Hey - I've found 1 issue, and left some high level feedback:

  • In _deepseek_reasoning_effort, you currently use payloads.setdefault('reasoning_effort', ...), which lets a caller-provided reasoning_effort silently override the provider config; consider explicitly deciding whether provider config or caller payload should have precedence and implement that (e.g., always overwrite or only allow a specific whitelist of values) to avoid surprising behavior.
  • Since _config_flag_enabled is now the canonical way to interpret bool-like config values, you might want to reuse it for other similar flags in this provider (and potentially centralize its usage) to keep behavior consistent across different providers’ boolean options.
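The precedence difference behind the first point shows up in a minimal example: `setdefault` only writes when the key is absent, so a caller-supplied value wins; direct assignment lets the provider config win.

```python
# Caller already set reasoning_effort in the request payload.
payloads = {"reasoning_effort": "low"}

# setdefault: no-op because the key exists, so the caller's value survives.
payloads.setdefault("reasoning_effort", "high")
assert payloads["reasoning_effort"] == "low"

# Direct assignment: the provider config takes precedence.
payloads["reasoning_effort"] = "high"
assert payloads["reasoning_effort"] == "high"
```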
## Individual Comments

### Comment 1
<location path="astrbot/core/provider/sources/openai_source.py" line_range="515-521" />
<code_context>
+            if normalized in {"high", "max"}:
+                return normalized
+        if value not in (None, ""):
+            logger.warning(
+                f"Invalid DeepSeek reasoning effort: {value}, falling back to high"
+            )
+        return "high"
+
</code_context>
<issue_to_address>
**suggestion (performance):** Repeated warnings for invalid DeepSeek reasoning effort might be noisy at runtime

Since `_deepseek_reasoning_effort` runs on every DeepSeek request, a bad setting will trigger this warning each time and can overwhelm logs in high-traffic environments. Consider validating once at config load, downgrading to debug or adding rate limiting, or caching the normalized value so the warning is only emitted once per invalid configuration.

```suggestion
    def _deepseek_reasoning_effort(self) -> str:
        # Cache the normalized value so we only validate and potentially log once
        cached = getattr(self, "_deepseek_reasoning_effort_cached", None)
        if cached is not None:
            return cached

        value = self.provider_config.get("deepseek_reasoning_effort", "high")
        normalized_value = "high"

        if isinstance(value, str):
            normalized = value.strip().lower()
            if normalized in {"high", "max"}:
                normalized_value = normalized
            elif value not in (None, ""):
                logger.warning(
                    f"Invalid DeepSeek reasoning effort: {value}, falling back to high"
                )
        elif value not in (None, ""):
            logger.warning(
                f"Invalid DeepSeek reasoning effort: {value}, falling back to high"
            )

        self._deepseek_reasoning_effort_cached = normalized_value
        return normalized_value
```
</issue_to_address>


Comment thread astrbot/core/provider/sources/openai_source.py Outdated
Contributor

@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request adds support for DeepSeek's thinking mode and reasoning effort, including configuration defaults, request override logic in the OpenAI source, and dashboard integration. Feedback identifies that enabling thinking mode by default may lead to 400 errors with the official DeepSeek API due to non-standard parameter formatting. Further improvements are suggested to ensure compatibility with older OpenAI SDK versions when handling the reasoning_effort parameter and to adhere to standard OpenAI values for reasoning effort levels.

Comment thread astrbot/core/config/default.py
Comment thread astrbot/core/provider/sources/openai_source.py
Comment thread astrbot/core/config/default.py
- Change reasoning_effort from setdefault to direct assignment so the provider config always takes effect
- Cache the _deepseek_reasoning_effort result to avoid repeated computation and duplicate warnings
- Add tests covering config precedence and warning deduplication
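The caching behavior in the second bullet can be sketched as below (the class and attribute names are illustrative, not the PR's actual code): an invalid value logs a warning once, and subsequent calls return the cached result.

```python
import logging

logger = logging.getLogger("provider")

class DeepSeekEffortConfig:
    """Illustrative sketch of the cached-effort pattern, not the PR's actual class."""

    def __init__(self, provider_config: dict):
        self.provider_config = provider_config
        self._effort_cached = None  # validated value, populated on first call

    def reasoning_effort(self) -> str:
        # Return the cached value so validation (and any warning) runs only once.
        if self._effort_cached is not None:
            return self._effort_cached
        value = self.provider_config.get("deepseek_reasoning_effort", "high")
        normalized = value.strip().lower() if isinstance(value, str) else ""
        if normalized not in {"high", "max"}:
            if value not in (None, ""):
                logger.warning(
                    "Invalid DeepSeek reasoning effort: %s, falling back to high", value
                )
            normalized = "high"
        self._effort_cached = normalized
        return normalized
```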
@dosubot dosubot Bot added size:L This PR changes 100-499 lines, ignoring generated files. and removed size:M This PR changes 30-99 lines, ignoring generated files. labels Apr 28, 2026
@piexian piexian changed the title from feat(provider): Add DeepSeek thinking mode toggle and reasoning effort configuration to feat(provider): Add DeepSeek V4 thinking mode toggle and reasoning effort configuration Apr 28, 2026

Labels

area:provider The bug / feature is about AI Provider, Models, LLM Agent, LLM Agent Runner. size:L This PR changes 100-499 lines, ignoring generated files.
