fix(dotAI): Dot AI LangChain4J - ProviderConfig fixes #35426
ihoffmann-dot wants to merge 22 commits into main
Conversation
…credential merge on save
…s from getConfig response
Claude finished @ihoffmann-dot's task in 2m 44s — PR Review
Issues
1.
2. The method iterates over
3. Streaming runtime failures don't attempt fallback —
4. Failed model instances stay in cache after runtime errors — When
5. The
6. Reasoning-model regex is brittle — if (model.matches("o[0-9].*")) {
7.

The REST contract removal flagged in the previous comment (removing
Pull Request Unsafe to Rollback!!!
- map.put(AppKeys.ROLE_PROMPT.key, appConfig.getRolePrompt());
- map.put(AppKeys.TEXT_PROMPT.key, appConfig.getTextPrompt());
- map.put(AppKeys.IMAGE_PROMPT.key, appConfig.getImagePrompt());
- map.put(AppKeys.IMAGE_SIZE.key, appConfig.getImageSize());
- map.put(AppKeys.LISTENER_INDEXER.key, appConfig.getListenerIndexer());

The Postman test in
// Transitional — emit legacy keys from providerConfig values for one release
map.put(AppKeys.ROLE_PROMPT.key, appConfig.getRolePrompt());
Logger.warn(LangChain4jAIClient.class,
    "Streaming model '" + modelName + "' init failed: " + cause.getMessage()
        + (models.size() > 1 ? " — trying next model" : ""));
continue;
This continue and return make the code harder to read.
Also, you should introduce some observability here; logs are OK, but they are not telling us anything about how long it takes, etc.
Also, you are using a system prompt that never changes; you probably want to introduce a cache entry point to save tokens.
Good catch, I extracted the logic into initStreamingModel() and added timing logs. Regarding the cache, it is a good observation, but it would need some logic that we could implement in another PR.
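For context, a minimal sketch of what such an extracted helper could look like: a loop that times each initialization attempt and falls through to the next candidate model on failure. The generic Function-based factory and the printed log format here are assumptions for illustration, not the actual PR code.

```java
import java.util.List;
import java.util.Optional;
import java.util.function.Function;

public class StreamingInitSketch {

    // Hypothetical helper: try each candidate model name in priority order,
    // log how long each initialization attempt took, and return the first
    // model that initializes successfully.
    static <M> Optional<M> initStreamingModel(final List<String> models,
                                              final Function<String, M> factory) {
        for (final String modelName : models) {
            final long start = System.nanoTime();
            try {
                final M model = factory.apply(modelName);
                System.out.printf("Streaming model '%s' initialized in %d ms%n",
                        modelName, (System.nanoTime() - start) / 1_000_000);
                return Optional.of(model);
            } catch (final Exception e) {
                System.out.printf("Streaming model '%s' init failed after %d ms: %s%s%n",
                        modelName, (System.nanoTime() - start) / 1_000_000,
                        e.getMessage(),
                        models.size() > 1 ? " — trying next model" : "");
            }
        }
        return Optional.empty();
    }

    public static void main(String[] args) {
        // First model fails to initialize, second one succeeds.
        final Optional<String> model = initStreamingModel(
                List.of("gpt-4o", "gpt-4o-mini"),
                name -> {
                    if (name.equals("gpt-4o")) {
                        throw new IllegalStateException("quota exceeded");
                    }
                    return "model:" + name;
                });
        System.out.println(model.orElse("none"));
    }
}
```

Extracting the loop body this way replaces the flagged continue/return flow with a single return point and keeps the timing concern in one place.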
for (final String modelName : models) {
    try {
        final ProviderConfig modelConfig = ImmutableProviderConfig.copyOf(baseConfig).withModel(modelName);
        final M model = modelCache.get(
For instance, this method is easier to read.
 */
default List<String> allModels() {
    final String m = model();
    if (m == null || m.isBlank()) {
Usually, if something may be null, Optional is a fantastic choice.
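Following that suggestion, here is a sketch of how allModels() could parse the comma-separated model field with Optional instead of a null check. The parsing behavior (trimming entries, dropping blanks) matches what the description implies; the exact stream pipeline is an assumption.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Optional;
import java.util.stream.Collectors;

public class AllModelsSketch {

    // Hypothetical Optional-flavored version of ProviderConfig.allModels():
    // parse a comma-separated model field into a priority-ordered list,
    // treating null/blank as "no models configured".
    static List<String> allModels(final String rawModel) {
        return Optional.ofNullable(rawModel)
                .filter(m -> !m.isBlank())
                .map(m -> Arrays.stream(m.split(","))
                        .map(String::trim)
                        .filter(s -> !s.isEmpty())
                        .collect(Collectors.toList()))
                .orElse(List.of());
    }

    public static void main(String[] args) {
        System.out.println(allModels("gpt-4o, gpt-4o-mini")); // [gpt-4o, gpt-4o-mini]
        System.out.println(allModels(null));                  // []
    }
}
```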
PRs linked to this issue
@Test
I suggest adding javadoc to tests. We generally use this format:
/**
* Method to test: {@link StoryBlockAPI#refreshReferences(Contentlet)}
* Given Scenario: This will create 2 block contents, adds a rich content to each block content and retrieve the json.
* ExpectedResult: The new json will contain the rich text data map for each block content.
*/
Summary

Consolidates AI configuration by moving prompts and per-capability settings into the providerConfig JSON sections, enabling a single-field setup in the Apps UI. Also adds comma-separated model fallback so any provider can specify multiple models in priority order.

- Make providerConfig visible in the Apps UI (hidden: false) and remove legacy per-field secrets (rolePrompt, textPrompt, imagePrompt, imageSize, listenerIndexer) from dotAI.yml
- AppConfig now reads prompts and settings from inside the providerConfig JSON sections instead of from separate app secrets
- ProviderConfig.allModels(): parse the comma-separated model field into a priority list for fallback

Configuration
{
  "chat": {
    "provider": "openai",
    "apiKey": "sk-...",
    "model": "gpt-4o,gpt-4o-mini",
    "maxTokens": 16384,
    "temperature": 1.0,
    "rolePrompt": "You are dotCMSbot, an AI assistant to help content creators.",
    "textPrompt": "Use Descriptive writing style."
  },
  "embeddings": {
    "provider": "openai",
    "apiKey": "sk-...",
    "model": "text-embedding-ada-002",
    "listenerIndexer": { "default": "blog,news,webPageContent" }
  },
  "image": {
    "provider": "openai",
    "apiKey": "sk-...",
    "model": "dall-e-3",
    "size": "1792x1024",
    "imagePrompt": "Use 16:9 aspect ratio."
  }
}

Notes
- Prompts (rolePrompt, textPrompt, imagePrompt) and embedding settings (listenerIndexer) now live inside providerConfig sections. Separate App secrets for these fields are no longer supported.
- Set "model": "gpt-4o,gpt-4o-mini" to try models in order. Sync requests fall back on both initialization and runtime failures; streaming falls back on initialization failures only (it cannot retry once bytes are written to the output stream).

Related Issue
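The sync-path behavior described above (fallback on runtime failures, plus the review's point about evicting failed instances) could be sketched as a loop that invalidates the cache entry when an invocation throws, so a broken model instance is not reused. The cache and factory shapes here are illustrative assumptions, not the PR's actual modelCache API.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class SyncFallbackSketch {

    // Stand-in for the per-model instance cache.
    static final Map<String, String> MODEL_CACHE = new ConcurrentHashMap<>();

    // Try each model in priority order; on a runtime failure, evict the
    // cached instance so the next request does not reuse a broken model.
    static String invokeWithFallback(final List<String> models,
                                     final Function<String, String> buildModel,
                                     final Function<String, String> invoke) {
        RuntimeException last = null;
        for (final String name : models) {
            try {
                final String model = MODEL_CACHE.computeIfAbsent(name, buildModel);
                return invoke.apply(model);
            } catch (final RuntimeException e) {
                MODEL_CACHE.remove(name); // don't keep a failed instance around
                last = e;
            }
        }
        throw last != null ? last : new IllegalStateException("no models configured");
    }

    public static void main(String[] args) {
        // First model fails at invocation time; the second one answers.
        final String out = invokeWithFallback(
                List.of("gpt-4o", "gpt-4o-mini"),
                name -> "instance:" + name,
                model -> {
                    if (model.equals("instance:gpt-4o")) {
                        throw new RuntimeException("rate limited");
                    }
                    return "answer from " + model;
                });
        System.out.println(out); // answer from instance:gpt-4o-mini
    }
}
```

A streaming variant could only use the first half of this loop: once the response body has started flushing, a thrown exception can no longer be retried against the next model.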
feat(dotAI): Dot AI LangChain4J integration (First provider/OpenAI)
#35150