HYPERFLEET-860 - feat: add delete and update lifecycle tests for resources #85
Conversation
📝 Walkthrough

Summary by CodeRabbit

Walkthrough

Refactors E2E async waits to use poller closures + Gomega matchers (pkg/helper/pollers.go, pkg/helper/matchers.go) and removes blocking WaitFor* helpers (pkg/helper/wait.go). Updates tests to use Eventually(h.Poll...) with helper.Have... matchers and adds Tier0 suites for cluster/nodepool delete and update lifecycles. Client APIs gain HTTPError and implemented Delete/Patch methods for clusters and nodepools. Helper surface trimmed (nodepool create/cleanup helpers removed). Adapter task configs and test payloads were made deletion-aware. Documentation and test-case metadata were updated.

Sequence Diagram(s)

sequenceDiagram
participant Test
participant Helper
participant Client as HyperFleet Client
participant API as HyperFleet API
participant Adapter as Adapter Status
Test->>Helper: CreateClusterFromPayload(path)
Helper->>Client: CreateCluster(req)
Client->>API: POST /clusters
API-->>Client: 202 Accepted (cluster)
Client-->>Helper: Cluster
loop Eventually (poll until Reconciled=True)
Test->>Helper: PollCluster(ctx, id)()
Helper->>Client: GetCluster(id)
Client->>API: GET /clusters/{id}
API-->>Client: 200 + Cluster(status)
Client-->>Helper: Cluster
Helper-->>Test: Cluster
end
Test->>Client: PatchCluster(id, patch)
Client->>API: PATCH /clusters/{id}
API-->>Client: 200 OK (generation+1)
Client-->>Test: Cluster (gen N+1)
loop Eventually (poll adapters until all at gen N+1)
Test->>Helper: PollClusterAdapterStatuses(ctx, id)()
Helper->>Client: GetClusterAdapterStatuses(id)
Client->>API: GET /clusters/{id}/adapters
API-->>Client: AdapterStatusList
Client-->>Helper: AdapterStatusList
Helper-->>Test: AdapterStatusList
end
Test->>Test: Assert with matchers (HaveResourceCondition / HaveAllAdaptersAtGeneration)
sequenceDiagram
participant Test
participant Helper
participant Client as HyperFleet Client
participant API as HyperFleet API
participant K8s as Kubernetes API
Test->>Helper: DeleteCluster(clusterID)
Helper->>Client: DeleteClusterById(clusterID)
Client->>API: DELETE /clusters/{id}
API-->>Client: 202 Accepted (DeletedTime set, gen++)
Client-->>Helper: Cluster (DeletedTime, gen)
Test->>Test: Verify DeletedTime non-nil, Generation incremented
loop Eventually (poll adapters until Finalized or hard-delete)
Test->>Helper: PollClusterAdapterStatuses(ctx, id)()
Helper->>Client: GetClusterAdapterStatuses(id)
Client->>API: GET /clusters/{id}/adapters
API-->>Client: AdapterStatusList
Client-->>Helper: AdapterStatusList
Helper-->>Test: AdapterStatusList
end
loop Eventually (poll HTTP status until 404)
Test->>Helper: PollClusterHTTPStatus(ctx, id)()
Helper->>Client: GetCluster(id)
alt exists
Client->>API: GET /clusters/{id}
API-->>Client: 200 OK
Client-->>Helper: 200
else deleted
API-->>Client: 404 Not Found
Client-->>Helper: 404
end
Helper-->>Test: status
end
Test->>Helper: PollNamespacesByPrefix(ctx, prefix)()
Helper->>K8s: FindNamespacesByPrefix(prefix)
K8s-->>Helper: []namespaces
Helper-->>Test: []namespaces
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed

❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
e2e/nodepool/creation.go (1)
247-256: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win: Don't let the pre-cleanup readiness wait block teardown.

If the cluster never becomes Ready, this Eventually(...).Should(...) fails before CleanupTestCluster runs, which can leave the test cluster/nodepool behind and pollute later runs. Make the readiness check best-effort, or ensure cleanup is guaranteed through a separate always-run path. As per coding guidelines, always implement resource cleanup in AfterEach blocks.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@e2e/nodepool/creation.go` around lines 247 - 256, The readiness wait must not block teardown: change the hard assertion using Eventually(h.PollCluster(ctx, clusterID), ...).Should(...) into a best-effort check that logs failures but does not fail the test, and ensure CleanupTestCluster(clusterID) always runs (move/duplicate cleanup into an AfterEach or a deferred/always-run path). Concretely, replace the strict Eventually(...).Should(helper.HaveResourceCondition(...)) with a non-fatal check that captures errors/results and logs them (or uses Ginkgo’s By + framework log) then proceed, and guarantee invocation of h.CleanupTestCluster(ctx, clusterID) from an AfterEach or a finally-style block so cleanup runs even when the readiness check times out; reference PollCluster and CleanupTestCluster to locate the changes.
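As an illustration of the pattern this comment asks for, here is a minimal sketch (not the PR's actual code) that keeps the readiness check best-effort while guaranteeing cleanup. It assumes the suite's existing helpers (h.PollCluster, helper.HaveResourceCondition, h.CleanupTestCluster), the h.Cfg timeout/interval fields used elsewhere in these tests, and a dot-import of Gomega, whose InterceptGomegaFailure returns any captured assertion failure as an error:

```go
// Best-effort readiness check followed by always-run cleanup (sketch only).
ginkgo.AfterEach(func(ctx context.Context) {
	if h == nil || clusterID == "" {
		return
	}

	// Capture a readiness failure instead of letting it abort teardown.
	if err := InterceptGomegaFailure(func() {
		Eventually(h.PollCluster(ctx, clusterID), h.Cfg.Timeouts.Cluster.Ready, h.Cfg.Polling.Interval).
			Should(helper.HaveResourceCondition(client.ConditionTypeReconciled, openapi.ResourceConditionStatusTrue))
	}); err != nil {
		ginkgo.GinkgoWriter.Printf("Warning: cluster %s not Reconciled before cleanup: %v\n", clusterID, err)
	}

	// Cleanup runs regardless of the readiness outcome; failures are logged, not fatal.
	if err := h.CleanupTestCluster(ctx, clusterID); err != nil {
		ginkgo.GinkgoWriter.Printf("Warning: failed to cleanup cluster %s: %v\n", clusterID, err)
	}
})
```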
🧹 Nitpick comments (7)
e2e/nodepool/update.go (2)
15-16: ⚡ Quick win: Align suite title with the required naming format.

Use [Suite: nodepool] NodePool Update Lifecycle (without the extra [update] token) to match the mandated suite-title convention. As per coding guidelines, e2e/**/*.go: Test name must be formatted as [Suite: component] Description.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@e2e/nodepool/update.go` around lines 15 - 16, Update the ginkgo.Describe suite title string used in the test declaration so it matches the mandated format: replace the current title passed to ginkgo.Describe (the string starting with "[Suite: nodepool][update] NodePool Update Lifecycle") with "[Suite: nodepool] NodePool Update Lifecycle"; locate the ginkgo.Describe call in this file (the var _ = ginkgo.Describe(...) declaration) and adjust only the title string to remove the extra "[update]" token.
69-70: ⚡ Quick win: Use helper.HaveResourceCondition directly for final condition assertions.

These checks currently use HasResourceCondition booleans; switch to matcher-based assertions for consistency and clearer failures.

Proposed change

    - hasReconciled := h.HasResourceCondition(finalNP.Status.Conditions, client.ConditionTypeReconciled, openapi.ResourceConditionStatusTrue)
    - Expect(hasReconciled).To(BeTrue(), "nodepool should have Reconciled=True")
    + Expect(finalNP).To(helper.HaveResourceCondition(client.ConditionTypeReconciled, openapi.ResourceConditionStatusTrue),
    +     "nodepool should have Reconciled=True")

    - hasParentReconciled := h.HasResourceCondition(parentCluster.Status.Conditions, client.ConditionTypeReconciled, openapi.ResourceConditionStatusTrue)
    - Expect(hasParentReconciled).To(BeTrue(), "parent cluster should remain Reconciled=True")
    + Expect(parentCluster).To(helper.HaveResourceCondition(client.ConditionTypeReconciled, openapi.ResourceConditionStatusTrue),
    +     "parent cluster should remain Reconciled=True")

As per coding guidelines, e2e/**/*.go: Use helper.HaveResourceCondition() custom matcher to verify resource conditions instead of inline assertions.

Also applies to: 83-84
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@e2e/nodepool/update.go` around lines 69 - 70, Replace the boolean-style checks that call h.HasResourceCondition with the matcher-based assertion helper.HaveResourceCondition to get clearer failures: instead of computing hasReconciled := h.HasResourceCondition(finalNP.Status.Conditions, client.ConditionTypeReconciled, openapi.ResourceConditionStatusTrue) and Expect(hasReconciled).To(BeTrue(), ...), call Expect(finalNP.Status.Conditions).To(helper.HaveResourceCondition(client.ConditionTypeReconciled, openapi.ResourceConditionStatusTrue)) (and do the same replacement for the similar check around lines 83-84) so the assertions use the custom matcher directly against finalNP.Status.Conditions.

e2e/cluster/delete.go (2)
16-17: ⚡ Quick win: Update suite titles to the required format.

Both suite names should follow [Suite: cluster] Description without the extra [delete] bracket token. As per coding guidelines, e2e/**/*.go: Test name must be formatted as [Suite: component] Description.

Also applies to: 111-112

🤖 Prompt for AI Agents
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@e2e/cluster/delete.go` around lines 16 - 17, The test suite titles include an extra bracketed token; update the ginkgo.Describe calls (e.g., the top-level ginkgo.Describe string currently "[Suite: cluster][delete] Cluster Deletion Lifecycle") to follow the required format "[Suite: cluster] Cluster Deletion Lifecycle" (remove the extra "[delete]" token) and apply the same change to the other ginkgo.Describe invocation referenced (lines 111-112) so all suite names use the pattern "[Suite: component] Description".
49-59: ⚡ Quick win: Prefer adapter custom matchers over manual adapter-condition loops.

Use HaveAllAdaptersWithCondition assertions here for the post-finalization condition checks to keep adapter verification consistent and centralized. As per coding guidelines, e2e/**/*.go: Use helper.HaveAllAdaptersWithCondition() and helper.HaveAllAdaptersAtGeneration() matchers for adapter status verification.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@e2e/cluster/delete.go` around lines 49 - 59, Replace the manual loop over statuses.Items with the centralized matcher-based checks: after calling h.Client.GetClusterStatuses(ctx, clusterID) use Expect(statuses).To(helper.HaveAllAdaptersWithCondition(client.ConditionTypeApplied, openapi.AdapterConditionStatusFalse)), Expect(statuses).To(helper.HaveAllAdaptersWithCondition(client.ConditionTypeAvailable, openapi.AdapterConditionStatusFalse)), and Expect(statuses).To(helper.HaveAllAdaptersWithCondition(client.ConditionTypeHealth, openapi.AdapterConditionStatusTrue)); also add an Expect(statuses).To(helper.HaveAllAdaptersAtGeneration(...)) if generation verification is required. This removes the explicit for-loop and uses helper.HaveAllAdaptersWithCondition and helper.HaveAllAdaptersAtGeneration matchers for centralized adapter status checks.

e2e/nodepool/delete.go (3)
16-17: ⚡ Quick win: Normalize suite title to the required format.

Rename to [Suite: nodepool] NodePool Deletion Lifecycle to match the enforced suite naming pattern. As per coding guidelines, e2e/**/*.go: Test name must be formatted as [Suite: component] Description.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@e2e/nodepool/delete.go` around lines 16 - 17, Update the ginkgo.Describe suite title string used in the var _ = ginkgo.Describe(...) declaration: change the title from "[Suite: nodepool][delete] NodePool Deletion Lifecycle" to the normalized format "[Suite: nodepool] NodePool Deletion Lifecycle" so it matches the enforced `[Suite: component] Description` pattern; locate the ginkgo.Describe call in this file and replace the combined tag string accordingly.
60-70: ⚡ Quick win: Use adapter custom matchers instead of manual condition loops.

This section should rely on HaveAllAdaptersWithCondition for the Applied=False, Available=False, and Health=True checks. As per coding guidelines, e2e/**/*.go: Use helper.HaveAllAdaptersWithCondition() and helper.HaveAllAdaptersAtGeneration() matchers for adapter status verification.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@e2e/nodepool/delete.go` around lines 60 - 70, Replace the manual loop that iterates over statuses.Items and calls h.HasAdapterCondition with the helper matchers: after calling h.Client.GetNodePoolStatuses(ctx, clusterID, nodepoolID) assert that statuses.Items satisfy helper.HaveAllAdaptersWithCondition for Applied=False, Available=False and Health=True respectively (three Expect calls), and optionally use helper.HaveAllAdaptersAtGeneration if generation checks are needed; remove the for loop and any direct uses of HasAdapterCondition so the test uses helper.HaveAllAdaptersWithCondition() (and HaveAllAdaptersAtGeneration()) for adapter status verification.
82-84: ⚡ Quick win: Switch parent-cluster condition check to HaveResourceCondition.

Using the matcher here keeps condition verification consistent with the rest of the suite pattern. As per coding guidelines, e2e/**/*.go: Use helper.HaveResourceCondition() custom matcher to verify resource conditions instead of inline assertions.

🤖 Prompt for AI Agents
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@e2e/cluster/delete.go`:
- Around line 157-167: The two Eventually checks that call h.Client.GetNodePool
for nodepoolID1 and nodepoolID2 are race-prone because GetNodePool may return a
404 if the child nodepool hard-deletes quickly; update each Eventually lambda
(the calls invoking h.Client.GetNodePool and asserting on npX.DeletedTime) to
treat either a successful response with np.DeletedTime != nil OR an HTTP 404
(not-found) error as a success: call GetNodePool, if err is a not-found/404
treat the check as passed, otherwise assert no error and that np.DeletedTime is
not nil.
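For illustration, a hedged sketch of the 404-tolerant check described above, with errors and net/http imported; it assumes the typed *client.HTTPError introduced in this PR exposes a StatusCode field and that the timeout field names on h.Cfg are only placeholders:

```go
// Accept either a soft-deleted nodepool or one that is already hard-deleted (sketch only).
Eventually(func(g Gomega) {
	np, err := h.Client.GetNodePool(ctx, clusterID, nodepoolID1)

	var httpErr *client.HTTPError
	if errors.As(err, &httpErr) && httpErr.StatusCode == http.StatusNotFound {
		// The child nodepool was hard-deleted before we observed DeletedTime; treat as success.
		return
	}

	g.Expect(err).NotTo(HaveOccurred())
	g.Expect(np.DeletedTime).NotTo(BeNil(), "nodepool should be marked for deletion")
}, h.Cfg.Timeouts.NodePool.Delete, h.Cfg.Polling.Interval).Should(Succeed())
```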
In `@e2e/nodepool/concurrent_creation.go`:
- Around line 177-183: The readiness wait
(Eventually(h.PollCluster(...)).Should(HaveResourceCondition(...))) must not
block cleanup; change the test so the readiness check is non-fatal and cleanup
always runs: either move the cleanup call into an AfterEach/teardown hook that
always invokes h.CleanupTestCluster(ctx, clusterID), or make the
Immediately-executed readiness probe only log the result (capture the Eventually
result from h.PollCluster(ctx, clusterID) and if it times out, call
processLogger or GinkgoWriter to report the failure but do not call
Should/Fail), then always call h.CleanupTestCluster(ctx, clusterID) afterwards;
reference: PollCluster, HaveResourceCondition, Eventually, and
CleanupTestCluster to locate the code to change.
In `@pkg/client/client.go`:
- Around line 54-60: When checking resp.StatusCode != expectedStatus in the
client code, don't ignore the error from io.ReadAll(resp.Body); capture the
(body, err) result and if err != nil include a clear read-failure message in the
returned HTTPError.Body (e.g., "failed to read response body: <err>") or combine
any partial body with the read error so callers get diagnostic info; update the
return that constructs the HTTPError (referencing HTTPError, resp.Body,
io.ReadAll, expectedStatus and action) to surface the read error instead of
silencing it.
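A sketch of what surfacing the read error could look like, assuming the HTTPError struct described in this review has StatusCode, Action, and Body fields; the real field names and the enclosing function's return signature may differ:

```go
// Non-expected status: build an HTTPError that still carries diagnostics even when the
// response body cannot be read (sketch only).
if resp.StatusCode != expectedStatus {
	body, readErr := io.ReadAll(resp.Body)
	bodyText := string(body)
	if readErr != nil {
		// Surface the read failure instead of silently returning an empty body.
		bodyText = fmt.Sprintf("failed to read response body: %v (partial body: %q)", readErr, body)
	}
	return &HTTPError{
		StatusCode: resp.StatusCode,
		Action:     action,
		Body:       bodyText,
	}
}
```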
---
Outside diff comments:
In `@e2e/nodepool/creation.go`:
- Around line 247-256: The readiness wait must not block teardown: change the
hard assertion using Eventually(h.PollCluster(ctx, clusterID), ...).Should(...)
into a best-effort check that logs failures but does not fail the test, and
ensure CleanupTestCluster(clusterID) always runs (move/duplicate cleanup into an
AfterEach or a deferred/always-run path). Concretely, replace the strict
Eventually(...).Should(helper.HaveResourceCondition(...)) with a non-fatal check
that captures errors/results and logs them (or uses Ginkgo’s By + framework log)
then proceed, and guarantee invocation of h.CleanupTestCluster(ctx, clusterID)
from an AfterEach or a finally-style block so cleanup runs even when the
readiness check times out; reference PollCluster and CleanupTestCluster to
locate the changes.
---
Nitpick comments:
In `@e2e/cluster/delete.go`:
- Around line 16-17: The test suite titles include an extra bracketed token;
update the ginkgo.Describe calls (e.g., the top-level ginkgo.Describe string
currently "[Suite: cluster][delete] Cluster Deletion Lifecycle") to follow the
required format "[Suite: cluster] Cluster Deletion Lifecycle" (remove the extra
"[delete]" token) and apply the same change to the other ginkgo.Describe
invocation referenced (lines 111-112) so all suite names use the pattern
"[Suite: component] Description".
- Around line 49-59: Replace the manual loop over statuses.Items with the
centralized matcher-based checks: after calling h.Client.GetClusterStatuses(ctx,
clusterID) use
Expect(statuses).To(helper.HaveAllAdaptersWithCondition(client.ConditionTypeApplied,
openapi.AdapterConditionStatusFalse)),
Expect(statuses).To(helper.HaveAllAdaptersWithCondition(client.ConditionTypeAvailable,
openapi.AdapterConditionStatusFalse)), and
Expect(statuses).To(helper.HaveAllAdaptersWithCondition(client.ConditionTypeHealth,
openapi.AdapterConditionStatusTrue)); also add an
Expect(statuses).To(helper.HaveAllAdaptersAtGeneration(...)) if generation
verification is required. This removes the explicit for-loop and uses
helper.HaveAllAdaptersWithCondition and helper.HaveAllAdaptersAtGeneration
matchers for centralized adapter status checks.
In `@e2e/nodepool/delete.go`:
- Around line 16-17: Update the ginkgo.Describe suite title string used in the
var _ = ginkgo.Describe(...) declaration: change the title from "[Suite:
nodepool][delete] NodePool Deletion Lifecycle" to the normalized format "[Suite:
nodepool] NodePool Deletion Lifecycle" so it matches the enforced `[Suite:
component] Description` pattern; locate the ginkgo.Describe call in this file
and replace the combined tag string accordingly.
- Around line 60-70: Replace the manual loop that iterates over statuses.Items
and calls h.HasAdapterCondition with the helper matchers: after calling
h.Client.GetNodePoolStatuses(ctx, clusterID, nodepoolID) assert that
statuses.Items satisfy helper.HaveAllAdaptersWithCondition for Applied=False,
Available=False and Health=True respectively (three Expect calls), and
optionally use helper.HaveAllAdaptersAtGeneration if generation checks are
needed; remove the for loop and any direct uses of HasAdapterCondition so the
test uses helper.HaveAllAdaptersWithCondition() (and
HaveAllAdaptersAtGeneration()) for adapter status verification.
- Around line 82-84: Replace the inline boolean check using
h.HasResourceCondition and Expect(hasReconciled).To(BeTrue()) with the suite's
custom matcher helper.HaveResourceCondition: call
Expect(parentCluster).To(helper.HaveResourceCondition(client.ConditionTypeReconciled,
openapi.ResourceConditionStatusTrue)) (removing the temporary hasReconciled
variable and the manual BeTrue assertion) so condition verification matches the
rest of the e2e suite.
In `@e2e/nodepool/update.go`:
- Around line 15-16: Update the ginkgo.Describe suite title string used in the
test declaration so it matches the mandated format: replace the current title
passed to ginkgo.Describe (the string starting with "[Suite: nodepool][update]
NodePool Update Lifecycle") with "[Suite: nodepool] NodePool Update Lifecycle";
locate the ginkgo.Describe call in this file (the var _ = ginkgo.Describe(...)
declaration) and adjust only the title string to remove the extra "[update]"
token.
- Around line 69-70: Replace the boolean-style checks that call
h.HasResourceCondition with the matcher-based assertion
helper.HaveResourceCondition to get clearer failures: instead of computing
hasReconciled := h.HasResourceCondition(finalNP.Status.Conditions,
client.ConditionTypeReconciled, openapi.ResourceConditionStatusTrue) and
Expect(hasReconciled).To(BeTrue(), ...), call
Expect(finalNP.Status.Conditions).To(helper.HaveResourceCondition(client.ConditionTypeReconciled,
openapi.ResourceConditionStatusTrue)) (and do the same replacement for the
similar check around lines 83-84) so the assertions use the custom matcher
directly against finalNP.Status.Conditions.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Enterprise
Run ID: e06f554e-da24-414d-9a2f-e86b1294dd37
📒 Files selected for processing (24)
CLAUDE.md, docs/development.md, e2e/cluster/concurrent_creation.go, e2e/cluster/creation.go, e2e/cluster/delete.go, e2e/cluster/update.go, e2e/nodepool/concurrent_creation.go, e2e/nodepool/creation.go, e2e/nodepool/delete.go, e2e/nodepool/update.go, pkg/client/client.go, pkg/client/cluster.go, pkg/client/constants.go, pkg/client/nodepool.go, pkg/helper/helper.go, pkg/helper/matchers.go, pkg/helper/pollers.go, pkg/helper/wait.go, test-design/testcases/delete-cluster.md, test-design/testcases/delete-nodepool.md, test-design/testcases/update-cluster.md, test-design/testcases/update-nodepool.md, testdata/payloads/clusters/cluster-patch.json, testdata/payloads/nodepools/nodepool-patch.json
💤 Files with no reviewable changes (1)
- pkg/helper/wait.go
Actionable comments posted: 2
🧹 Nitpick comments (2)
e2e/nodepool/delete.go (1)
16-17: ⚡ Quick win: Align suite name with the required test naming convention

Please rename to the [Suite: nodepool] ... format instead of [Suite: nodepool][delete] .... As per coding guidelines, e2e/**/*.go: Test name must follow format [Suite: component] Description (e.g., [Suite: cluster] Create Cluster via API).

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@e2e/nodepool/delete.go` around lines 16 - 17, The suite name string passed to ginkgo.Describe currently uses "[Suite: nodepool][delete] NodePool Deletion Lifecycle"; update that string to follow the required format by removing the extra "[delete]" segment so it becomes "[Suite: nodepool] NodePool Deletion Lifecycle" (leave the surrounding ginkgo.Describe call, labels, and other code unchanged).

e2e/cluster/update.go (1)

15-16: ⚡ Quick win: Normalize suite title to the required naming format

Use the [Suite: cluster] Description format; the current title includes an extra [update] segment. As per coding guidelines, e2e/**/*.go: Test name must follow format [Suite: component] Description (e.g., [Suite: cluster] Create Cluster via API).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@e2e/cluster/update.go` around lines 15 - 16, Update the ginkgo.Describe test title string to follow the required naming format by removing the extra "[update]" segment; locate the var _ = ginkgo.Describe(...) call and change its first argument to "[Suite: cluster] Cluster Update Lifecycle" so it matches the "[Suite: component] Description" convention used across e2e tests.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@e2e/cluster/update.go`:
- Around line 50-64: The check is racy: you poll only for Reconciled=True via
h.PollCluster/HaveResourceCondition then immediately assert the Reconciled
condition's ObservedGeneration equals expectedGen on a single read which can
fail; modify the polling to wait until the cluster's Reconciled condition has
ObservedGeneration == expectedGen before reading finalCluster. Replace the
current Eventually(h.PollCluster(...).Should(HaveResourceCondition(...))) +
single-shot GetCluster/assert with a single polling Eventually that either (a)
uses a new matcher like
helper.HaveResourceConditionWithObservedGeneration(client.ConditionTypeReconciled,
openapi.ResourceConditionStatusTrue, expectedGen) or (b) uses a closure that
calls h.Client.GetCluster(ctx, clusterID) and returns true only when
HasResourceCondition(finalCluster.Status.Conditions,
client.ConditionTypeReconciled, openapi.ResourceConditionStatusTrue) is true AND
the matching condition.ObservedGeneration == expectedGen; then fetch
finalCluster and perform remaining assertions.
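Option (b) from this prompt could look roughly like the sketch below; condition and config field names are taken from elsewhere in this review and may not match the code exactly (the ObservedGeneration comparison may need a cast):

```go
// Poll until the Reconciled condition reports the expected generation, then keep the
// final object for the remaining assertions (sketch only).
var finalCluster *openapi.Cluster
Eventually(func(g Gomega) {
	c, err := h.Client.GetCluster(ctx, clusterID)
	g.Expect(err).NotTo(HaveOccurred())

	reconciledAtGen := false
	for _, cond := range c.Status.Conditions {
		if cond.Type == client.ConditionTypeReconciled &&
			cond.Status == openapi.ResourceConditionStatusTrue &&
			cond.ObservedGeneration == expectedGen {
			reconciledAtGen = true
			break
		}
	}
	g.Expect(reconciledAtGen).To(BeTrue(), "Reconciled=True should reach generation %d", expectedGen)

	finalCluster = c
}, h.Cfg.Timeouts.Cluster.Ready, h.Cfg.Polling.Interval).Should(Succeed())
```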
In `@pkg/client/client.go`:
- Around line 57-63: The current branch returns a generic fmt.Errorf when
reading the response body fails, which loses the typed *HTTPError needed for
status-based polling; instead, return an &HTTPError with StatusCode set to
resp.StatusCode and Action set to action, and put the body-read failure details
into the Body field (e.g., include the read error message and any partial body
if available) so callers can errors.As(..., *client.HTTPError) and still access
StatusCode.
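To show why the typed error matters to callers, here is a sketch of how a status poller might rely on it; the real PollClusterHTTPStatus in pkg/helper/pollers.go may be shaped differently, and the helper receiver type H is only assumed:

```go
// Map any response, including 404 after hard delete, to an HTTP status the test can
// poll on; only genuinely unexpected errors are propagated (sketch only).
func (h *H) PollClusterHTTPStatus(ctx context.Context, clusterID string) func() (int, error) {
	return func() (int, error) {
		_, err := h.Client.GetCluster(ctx, clusterID)
		if err == nil {
			return http.StatusOK, nil
		}
		var httpErr *client.HTTPError
		if errors.As(err, &httpErr) {
			return httpErr.StatusCode, nil // e.g. 404 once the cluster is gone
		}
		return 0, err // transport failures or unreadable bodies still surface
	}
}
```

If body-read failures came back as a plain fmt.Errorf instead, the errors.As branch above would never fire and the poller would error out rather than report the status.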
---
Nitpick comments:
In `@e2e/cluster/update.go`:
- Around line 15-16: Update the ginkgo.Describe test title string to follow the
required naming format by removing the extra "[update]" segment; locate the var
_ = ginkgo.Describe(...) call and change its first argument to "[Suite: cluster]
Cluster Update Lifecycle" so it matches the "[Suite: component] Description"
convention used across e2e tests.
In `@e2e/nodepool/delete.go`:
- Around line 16-17: The suite name string passed to ginkgo.Describe currently
uses "[Suite: nodepool][delete] NodePool Deletion Lifecycle"; update that string
to follow the required format by removing the extra "[delete]" segment so it
becomes "[Suite: nodepool] NodePool Deletion Lifecycle" (leave the surrounding
ginkgo.Describe call, labels, and other code unchanged).
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Enterprise
Run ID: 754f0bd8-d756-4501-8b8d-90dc13f5eff9
📒 Files selected for processing (14)
CLAUDE.md, docs/development.md, e2e/cluster/concurrent_creation.go, e2e/cluster/creation.go, e2e/cluster/delete.go, e2e/cluster/update.go, e2e/nodepool/concurrent_creation.go, e2e/nodepool/creation.go, e2e/nodepool/delete.go, e2e/nodepool/update.go, pkg/client/client.go, pkg/helper/matchers.go, pkg/helper/pollers.go, pkg/helper/wait.go
💤 Files with no reviewable changes (1)
- pkg/helper/wait.go
✅ Files skipped from review due to trivial changes (2)
- e2e/cluster/creation.go
- docs/development.md
🚧 Files skipped from review as they are similar to previous changes (7)
- e2e/cluster/concurrent_creation.go
- pkg/helper/pollers.go
- e2e/nodepool/concurrent_creation.go
- e2e/nodepool/creation.go
- e2e/cluster/delete.go
- e2e/nodepool/update.go
- CLAUDE.md
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@e2e/nodepool/creation.go`:
- Around line 251-256: The AfterEach currently uses a fatal
Eventually(...).Should(helper.HaveResourceCondition(...)) which can abort the
test flow and prevent cleanup; change it to perform a best-effort readiness
check without failing the AfterEach (e.g., call h.PollCluster(ctx, clusterID)
and capture the error/result of helper.HaveResourceCondition instead of using
Should), log any readiness timeout or error via ginkgo.GinkgoWriter.Printf(),
then always call h.CleanupTestCluster(ctx, clusterID) and on cleanup failure log
the error with ginkgo.GinkgoWriter.Printf() rather than using Expect; reference
functions: h.PollCluster, helper.HaveResourceCondition, h.CleanupTestCluster,
and the AfterEach block.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Enterprise
Run ID: 6f60e022-2ce6-4825-93f7-e13a7e7b248f
📒 Files selected for processing (14)
CLAUDE.md, docs/development.md, e2e/cluster/concurrent_creation.go, e2e/cluster/creation.go, e2e/cluster/delete.go, e2e/cluster/update.go, e2e/nodepool/concurrent_creation.go, e2e/nodepool/creation.go, e2e/nodepool/delete.go, e2e/nodepool/update.go, pkg/client/client.go, pkg/helper/matchers.go, pkg/helper/pollers.go, pkg/helper/wait.go
💤 Files with no reviewable changes (1)
- pkg/helper/wait.go
✅ Files skipped from review due to trivial changes (2)
- CLAUDE.md
- docs/development.md
🚧 Files skipped from review as they are similar to previous changes (2)
- e2e/cluster/concurrent_creation.go
- e2e/nodepool/update.go
Actionable comments posted: 7
♻️ Duplicate comments (2)
e2e/nodepool/concurrent_creation.go (1)
36-40: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win: Restore explicit AfterEach cleanup and avoid hard-failing teardown readiness checks.

Line 184 can fail teardown before cleanup logic in this hook, and AfterEach no longer calls h.CleanupTestCluster(ctx, clusterID) directly. Keep the readiness check best-effort and always clean up from AfterEach.

Suggested refactor

    -  ginkgo.DeferCleanup(func(ctx context.Context) {
    -      if err := h.CleanupTestCluster(ctx, clusterID); err != nil {
    -          ginkgo.GinkgoWriter.Printf("Warning: failed to cleanup cluster %s: %v\n", clusterID, err)
    -      }
    -  })
    @@
       ginkgo.AfterEach(func(ctx context.Context) {
           if h == nil || clusterID == "" {
               return
           }
           ginkgo.By("Verify final cluster state to ensure Reconciled before cleanup")
    -      Eventually(h.PollCluster(ctx, clusterID), h.Cfg.Timeouts.Cluster.Ready, h.Cfg.Polling.Interval).
    -          Should(helper.HaveResourceCondition(client.ConditionTypeReconciled, openapi.ResourceConditionStatusTrue))
    +      if failure := InterceptGomegaFailure(func() {
    +          Eventually(h.PollCluster(ctx, clusterID), h.Cfg.Timeouts.Cluster.Ready, h.Cfg.Polling.Interval).
    +              Should(helper.HaveResourceCondition(client.ConditionTypeReconciled, openapi.ResourceConditionStatusTrue))
    +      }); failure != "" {
    +          ginkgo.GinkgoWriter.Printf("Warning: cluster %s was not Reconciled before cleanup: %s\n", clusterID, failure)
    +      }
    +      if err := h.CleanupTestCluster(ctx, clusterID); err != nil {
    +          ginkgo.GinkgoWriter.Printf("Warning: cleanup failed for cluster %s: %v\n", clusterID, err)
    +      }
       })

As per coding guidelines, e2e/**/*.go: "Always implement AfterEach() cleanup for test resources, checking if h == nil or resourceID == "" before attempting cleanup, and log warnings if cleanup fails".

Also applies to: 178-186
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@e2e/nodepool/concurrent_creation.go` around lines 36 - 40, Restore an explicit AfterEach that always calls h.CleanupTestCluster(ctx, clusterID) (guarded by checks for h == nil or clusterID == ""), and move the readiness/teardown checks in ginkgo.DeferCleanup/ginkgo hooks to be best-effort (log warnings rather than hard-fail) so they don't prevent the AfterEach cleanup from running; specifically, re-add an AfterEach that calls h.CleanupTestCluster, ensure any readiness check code referenced by teardown hooks does not call t.Fatal/Fail or return errors that stop cleanup, and update the ginkgo.DeferCleanup block and any teardown readiness logic to only log warnings on failure (using ginkgo.GinkgoWriter.Printf or similar) while leaving CleanupTestCluster as the guaranteed cleanup path.

e2e/nodepool/creation.go (1)
34-38: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win: Keep teardown cleanup in AfterEach and make the pre-cleanup readiness check non-fatal.

Line 252 uses a fatal Should(...) in teardown, and this file no longer performs h.CleanupTestCluster(ctx, clusterID) inside AfterEach. That makes teardown behavior brittle and inconsistent with suite conventions.

Suggested refactor

    -  ginkgo.DeferCleanup(func(ctx context.Context) {
    -      if err := h.CleanupTestCluster(ctx, clusterID); err != nil {
    -          ginkgo.GinkgoWriter.Printf("Warning: failed to cleanup cluster %s: %v\n", clusterID, err)
    -      }
    -  })
    @@
       ginkgo.AfterEach(func(ctx context.Context) {
           if h == nil || clusterID == "" {
               return
           }
           ginkgo.By("Verify final cluster state to ensure Reconciled before cleanup")
    -      Eventually(h.PollCluster(ctx, clusterID), h.Cfg.Timeouts.Cluster.Ready, h.Cfg.Polling.Interval).
    -          Should(helper.HaveResourceCondition(client.ConditionTypeReconciled, openapi.ResourceConditionStatusTrue))
    +      if failure := InterceptGomegaFailure(func() {
    +          Eventually(h.PollCluster(ctx, clusterID), h.Cfg.Timeouts.Cluster.Ready, h.Cfg.Polling.Interval).
    +              Should(helper.HaveResourceCondition(client.ConditionTypeReconciled, openapi.ResourceConditionStatusTrue))
    +      }); failure != "" {
    +          ginkgo.GinkgoWriter.Printf("Warning: cluster %s was not Reconciled before cleanup: %s\n", clusterID, failure)
    +      }
    +      if err := h.CleanupTestCluster(ctx, clusterID); err != nil {
    +          ginkgo.GinkgoWriter.Printf("Warning: cleanup failed for cluster %s: %v\n", clusterID, err)
    +      }
       })

As per coding guidelines, e2e/**/*.go: "Always implement AfterEach() cleanup for test resources, checking if h == nil or resourceID == "" before attempting cleanup, and log warnings if cleanup fails".

Also applies to: 246-254
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@e2e/nodepool/creation.go` around lines 34 - 38, Move the teardown to a proper AfterEach by replacing the fatal Should(...) call and the ginkgo.DeferCleanup usage with an AfterEach that performs non-fatal cleanup: in AfterEach check if h == nil or clusterID == "" and return early; otherwise call h.CleanupTestCluster(ctx, clusterID) and log any error with ginkgo.GinkgoWriter.Printf("Warning: failed to cleanup cluster %s: %v\n", clusterID, err) instead of failing the test; also make the current pre-cleanup readiness check non-fatal (do not use gomega.Should/Fatal) so readiness failures only log warnings and do not abort teardown.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@e2e/cluster/delete.go`:
- Around line 50-70: During the Eventually polling block (the anonymous func
using h.PollClusterHTTPStatus and h.Client.GetClusterStatuses) treat a 404 from
GetClusterStatuses as a terminal success: after calling
h.Client.GetClusterStatuses(ctx, clusterID) check whether the error indicates
http.StatusNotFound (or the returned HTTP status is 404) and if so return from
the closure (same behavior as the earlier httpStatus==http.StatusNotFound check)
instead of failing the Expect; update the closure around GetClusterStatuses in
the Eventually to short-circuit and consider the poll successful when statuses
are 404.
In `@e2e/nodepool/delete.go`:
- Around line 60-80: The Eventually block can race with hard-delete because
GetNodePoolStatuses may return 404 even after PollNodePoolHTTPStatus showed
non-404; modify the closure so after calling h.Client.GetNodePoolStatuses(ctx,
clusterID, nodepoolID) you treat a 404 (NotFound) error as a success/early
return: detect the 404 condition from the returned error and return from the
Eventually closure (same behavior as when PollNodePoolHTTPStatus returns
http.StatusNotFound), otherwise keep the existing
g.Expect(err).NotTo(HaveOccurred()) and the adapter presence/Finalized checks
(references: PollNodePoolHTTPStatus, GetNodePoolStatuses, HasAdapterCondition,
h.Cfg.Adapters.NodePool).
In `@pkg/client/cluster.go`:
- Around line 76-122: The calls to DeleteClusterById and PatchClusterById are
failing because the OpenAPI-generated client is missing; run the project's
codegen (invoke "make generate") to regenerate the OpenAPI client so methods
like DeleteClusterById and PatchClusterById exist, then commit the generated
client package; after generating, confirm the generated method names match the
calls in HyperFleetClient (e.g., DeleteClusterById, PatchClusterById used in
DeleteCluster, PatchCluster, and PatchClusterRaw) and update any mismatched call
sites or imports accordingly.
In `@pkg/client/nodepool.go`:
- Around line 72-88: The build fails because the generated OpenAPI client
package (pkg/api/openapi) is missing and methods like DeleteNodePoolById /
PatchNodePoolById referenced by DeleteNodePool (and other functions) don't
exist; run the OpenAPI code generation (e.g., the Makefile target that produces
pkg/api/openapi/openapi.gen.go), then rebuild and update imports/usages to match
the exact method names/signatures in the generated client (verify
DeleteNodePoolById, PatchNodePoolById or rename calls to the generated names in
nodepool.go and client.go), and finally ensure the repo passes make fmt, make
vet, and make lint before pushing.
In `@pkg/helper/helper.go`:
- Around line 185-186: The helper currently calls h.Client.DeleteNodePool(ctx,
clusterID, nodepoolID) and returns immediately, but DeleteNodePool now only
enqueues a soft-delete; update CleanupTestNodePool to wait until the nodepool is
actually removed by polling the nodepool status (using h.Client.GetNodePool(ctx,
clusterID, nodepoolID)) and loop until either GetNodePool returns a 404 (not
found) or the returned NodePool resource has Finalized == true; use the existing
ctx for cancellation, add a sensible timeout/retry/backoff and short sleep
between polls, and return any final error if the wait fails or context expires.
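A hedged sketch of a cleanup helper that waits for actual removal, under the assumptions that the client returns *client.HTTPError with a StatusCode, that a Finalized condition constant exists (client.ConditionTypeFinalized is hypothetical here), and that the timeout/interval field names on h.Cfg are illustrative:

```go
// Delete the nodepool, then wait until it is either hard-deleted (404) or reports
// Finalized (sketch only; condition and config names are placeholders).
func (h *H) CleanupTestNodePool(ctx context.Context, clusterID, nodepoolID string) error {
	if err := h.Client.DeleteNodePool(ctx, clusterID, nodepoolID); err != nil {
		return fmt.Errorf("delete nodepool %s: %w", nodepoolID, err)
	}

	ticker := time.NewTicker(h.Cfg.Polling.Interval)
	defer ticker.Stop()
	deadline := time.After(h.Cfg.Timeouts.NodePool.Delete)

	for {
		np, err := h.Client.GetNodePool(ctx, clusterID, nodepoolID)

		var httpErr *client.HTTPError
		if errors.As(err, &httpErr) && httpErr.StatusCode == http.StatusNotFound {
			return nil // hard-deleted
		}
		if err == nil && h.HasResourceCondition(np.Status.Conditions, client.ConditionTypeFinalized, openapi.ResourceConditionStatusTrue) {
			return nil // soft-delete finalized
		}

		select {
		case <-ctx.Done():
			return fmt.Errorf("waiting for nodepool %s removal: %w", nodepoolID, ctx.Err())
		case <-deadline:
			return fmt.Errorf("timed out waiting for nodepool %s to be removed", nodepoolID)
		case <-ticker.C:
		}
	}
}
```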
In `@pkg/helper/matchers.go`:
- Around line 66-74: The matcher dereferences a possibly typed-nil
*openapi.AdapterStatusList (list.Items) causing a panic; add an explicit nil
guard at the top of allAdaptersConditionMatcher.Match (and the similar matcher
at lines ~114-123, e.g., anyAdaptersConditionMatcher.Match) that checks if list
== nil and handles it safely (either treat it as an empty list by initializing a
zero-value slice for list.Items or return a non-panicking failure error), then
proceed to build adapterMap and iterate list.Items; update both match methods to
use the same safe-nil handling.
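A sketch of the guard, assuming the matcher shape implied by this comment (a Gomega matcher whose Match receives a *openapi.AdapterStatusList); the struct fields and the hasAdapterCondition helper are placeholders:

```go
// Match treats a typed-nil or empty list as "no match yet" instead of panicking,
// so Eventually keeps polling (sketch only).
func (m *allAdaptersConditionMatcher) Match(actual interface{}) (bool, error) {
	list, ok := actual.(*openapi.AdapterStatusList)
	if !ok {
		return false, fmt.Errorf("expected *openapi.AdapterStatusList, got %T", actual)
	}
	if list == nil || len(list.Items) == 0 {
		return false, nil
	}
	for _, adapter := range list.Items {
		if !hasAdapterCondition(adapter, m.conditionType, m.status) {
			return false, nil
		}
	}
	return true, nil
}
```

The same nil check belongs in anyAdaptersConditionMatcher.Match so both matchers behave consistently when the adapter list has not been populated yet.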
In `@testdata/adapter-configs/cl-maestro/adapter-task-config.yaml`:
- Around line 169-190: The Finalized status expression currently only checks
that resources.?resource0 is absent, which can let Finalized=True even when
discovered spoke resources (e.g., namespace0, configmap0) still exist; update
the Finalized condition, reason, and message expressions in the "Finalized"
status block so they validate that all relevant spoke resources are absent
(e.g., check resources.?namespace0.hasValue() and
resources.?configmap0.hasValue() in addition to resources.?resource0.hasValue())
before returning "True"/"CleanupConfirmed", and return
"False"/"CleanupInProgress" otherwise, while keeping the existing is_deleting
and adapter.?executionStatus.orValue("") == "success" checks.
---
Duplicate comments:
In `@e2e/nodepool/concurrent_creation.go`:
- Around line 36-40: Restore an explicit AfterEach that always calls
h.CleanupTestCluster(ctx, clusterID) (guarded by checks for h == nil or
clusterID == ""), and move the readiness/teardown checks in
ginkgo.DeferCleanup/ginkgo hooks to be best-effort (log warnings rather than
hard-fail) so they don't prevent the AfterEach cleanup from running;
specifically, re-add an AfterEach that calls h.CleanupTestCluster, ensure any
readiness check code referenced by teardown hooks does not call t.Fatal/Fail or
return errors that stop cleanup, and update the ginkgo.DeferCleanup block and
any teardown readiness logic to only log warnings on failure (using
ginkgo.GinkgoWriter.Printf or similar) while leaving CleanupTestCluster as the
guaranteed cleanup path.
In `@e2e/nodepool/creation.go`:
- Around line 34-38: Move the teardown to a proper AfterEach by replacing the
fatal Should(...) call and the ginkgo.DeferCleanup usage with an AfterEach that
performs non-fatal cleanup: in AfterEach check if h == nil or clusterID == ""
and return early; otherwise call h.CleanupTestCluster(ctx, clusterID) and log
any error with ginkgo.GinkgoWriter.Printf("Warning: failed to cleanup cluster
%s: %v\n", clusterID, err) instead of failing the test; also make the current
pre-cleanup readiness check non-fatal (do not use gomega.Should/Fatal) so
readiness failures only log warnings and do not abort teardown.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Enterprise
Run ID: 9ccdf895-eb42-4390-b82b-1e96cc5c477e
📒 Files selected for processing (29)
CLAUDE.md, docs/development.md, e2e/cluster/concurrent_creation.go, e2e/cluster/creation.go, e2e/cluster/delete.go, e2e/cluster/update.go, e2e/nodepool/concurrent_creation.go, e2e/nodepool/creation.go, e2e/nodepool/delete.go, e2e/nodepool/update.go, pkg/client/client.go, pkg/client/cluster.go, pkg/client/constants.go, pkg/client/nodepool.go, pkg/helper/helper.go, pkg/helper/matchers.go, pkg/helper/pollers.go, pkg/helper/wait.go, test-design/testcases/delete-cluster.md, test-design/testcases/delete-nodepool.md, test-design/testcases/update-cluster.md, test-design/testcases/update-nodepool.md, testdata/adapter-configs/cl-deployment/adapter-task-config.yaml, testdata/adapter-configs/cl-job/adapter-task-config.yaml, testdata/adapter-configs/cl-maestro/adapter-task-config.yaml, testdata/adapter-configs/cl-namespace/adapter-task-config.yaml, testdata/adapter-configs/np-configmap/adapter-task-config.yaml, testdata/payloads/clusters/cluster-patch.json, testdata/payloads/nodepools/nodepool-patch.json
💤 Files with no reviewable changes (1)
- pkg/helper/wait.go
✅ Files skipped from review due to trivial changes (11)
- testdata/payloads/nodepools/nodepool-patch.json
- testdata/payloads/clusters/cluster-patch.json
- e2e/cluster/concurrent_creation.go
- test-design/testcases/update-cluster.md
- e2e/nodepool/update.go
- e2e/cluster/creation.go
- e2e/cluster/update.go
- pkg/helper/pollers.go
- test-design/testcases/update-nodepool.md
- test-design/testcases/delete-cluster.md
- CLAUDE.md
🚧 Files skipped from review as they are similar to previous changes (4)
- pkg/client/constants.go
- test-design/testcases/delete-nodepool.md
- pkg/client/client.go
- docs/development.md
♻️ Duplicate comments (3)
pkg/client/cluster.go (1)
76-122: ⚠️ Potential issue | 🔴 Critical | ⚡ Quick win: Build is blocked until cluster delete/patch generated methods are present

Line 76, Line 94, and Line 122 call DeleteClusterById/PatchClusterById, and typecheck shows they are unresolved on HyperFleetClient. Please regenerate/sync the OpenAPI client and confirm method names/signatures match these call sites.

Use this read-only check to confirm whether the generated methods exist and match wrappers:

    #!/bin/bash
    set -euo pipefail
    echo "== HyperFleetClient definition =="
    rg -n --type go 'type HyperFleetClient struct' -A12 pkg/client/client.go
    echo
    echo "== Generated cluster methods on openapi.Client =="
    rg -n --type go 'func \(c \*Client\) (DeleteClusterById|PatchClusterById)\(' || true
    echo
    echo "== Cluster wrapper call sites =="
    rg -n --type go 'DeleteClusterById\(|PatchClusterById\(' pkg/client/cluster.go

Expected result: both method definitions are found in generated OpenAPI client code; if not, regenerate OpenAPI client sources and align wrapper calls with generated method names.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@pkg/client/cluster.go` around lines 76 - 122, The wrapper methods on HyperFleetClient call generated OpenAPI methods DeleteClusterById and PatchClusterById which are currently unresolved; regenerate/sync the OpenAPI client code and/or update the wrapper to match the actual generated method names/signatures: verify the generated client has methods named DeleteClusterById and PatchClusterById on the openapi Client, confirm their parameter and return types match how HyperFleetClient.PatchCluster, PatchClusterRaw and DeleteCluster call them, then either regenerate the OpenAPI sources or rename/adjust the wrapper calls and adapters so HyperFleetClient compiles against the true generated method names and signatures.

pkg/client/nodepool.go (1)
76-122: ⚠️ Potential issue | 🔴 Critical | ⚡ Quick win: Same generated-client blocker applies to nodepool delete/patch APIs

Line 76 and Line 94/122 rely on DeleteNodePoolById/PatchNodePoolById; these must exist on the generated OpenAPI client for this wrapper to compile.

Verify method presence/signatures with this read-only check:

    #!/bin/bash
    set -euo pipefail
    echo "== HyperFleetClient definition =="
    rg -n --type go 'type HyperFleetClient struct' -A12 pkg/client/client.go
    echo
    echo "== Generated nodepool methods on openapi.Client =="
    rg -n --type go 'func \(c \*Client\) (DeleteNodePoolById|PatchNodePoolById)\(' || true
    echo
    echo "== Nodepool wrapper call sites =="
    rg -n --type go 'DeleteNodePoolById\(|PatchNodePoolById\(' pkg/client/nodepool.go

Expected result: generated method definitions exist and signatures align with wrapper calls; otherwise regenerate/update OpenAPI client code and reconcile call sites.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@pkg/client/nodepool.go` around lines 76 - 122, The wrapper calls DeleteNodePoolById and PatchNodePoolById from the generated openapi Client (used in HyperFleetClient methods in pkg/client/nodepool.go) but those generated methods may be missing or have different signatures; verify the generated client defines func (c *Client) DeleteNodePoolById(...) and PatchNodePoolById(...) with the exact parameters/returns used here, and if they are absent or signatures differ regenerate/update the OpenAPI client code (or adapt the wrapper to the actual generated method names/signatures), then update the wrapper call sites in PatchNodePool, PatchNodePoolRaw and DeleteNodePool code paths to match the correct method names and parameter order/types so the package compiles.

testdata/adapter-configs/cl-maestro/adapter-task-config.yaml (1)
169-190: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win: Wait for nested spoke resources before reporting Finalized=True.

With propagationPolicy: Background on Line 76, resource0 can disappear before namespace0 and configmap0. That lets this condition mark cleanup complete while discovered spoke resources still exist.

Suggested fix

      - type: "Finalized"
        status:
          expression: |
            is_deleting
            && adapter.?executionStatus.orValue("") == "success"
            && !adapter.?resourcesSkipped.orValue(false)
            && !resources.?resource0.hasValue()
    +       && !resources.?namespace0.hasValue()
    +       && !resources.?configmap0.hasValue()
            ? "True" : "False"
        reason:
          expression: |
            !is_deleting
            ? ""
    -       : !resources.?resource0.hasValue()
    +       : !resources.?resource0.hasValue()
    +         && !resources.?namespace0.hasValue()
    +         && !resources.?configmap0.hasValue()
              ? "CleanupConfirmed"
              : "CleanupInProgress"
        message:
          expression: |
            !is_deleting
            ? ""
    -       : !resources.?resource0.hasValue()
    +       : !resources.?resource0.hasValue()
    +         && !resources.?namespace0.hasValue()
    +         && !resources.?configmap0.hasValue()
              ? "All resources deleted; cleanup confirmed"
              : "Deletion in progress; waiting for ManifestWork to be removed"

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@testdata/adapter-configs/cl-maestro/adapter-task-config.yaml` around lines 169 - 190, The Finalized status expression currently returns True when resources.?resource0.hasValue() is false, which can falsely mark cleanup complete under propagationPolicy: Background because nested spoke resources (e.g., namespace0, configmap0) may still exist; update the Finalized/status.expression (and related reason/message expressions) to require that all discovered spoke resources are absent before returning "True" — for example, replace the single resources.?resource0.hasValue() check with a combined check that namespace0 and configmap0 (and any other resources in the resources collection) do not have values (e.g., ensure resources.?resource0.hasValue() == false && resources.?namespace0.hasValue() == false && resources.?configmap0.hasValue() == false, or a generic "resources list is empty" condition) so Finalized only becomes True once every nested spoke resource is gone.
🧹 Nitpick comments (2)
e2e/cluster/update.go (1)
15-16: ⚡ Quick win: Normalize suite title to the required naming format

The current title includes an extra bracketed tag ([update]). Use the canonical format: [Suite: cluster] Description.

Proposed change

    -var _ = ginkgo.Describe("[Suite: cluster][update] Cluster Update Lifecycle",
    +var _ = ginkgo.Describe("[Suite: cluster] Cluster Update Lifecycle",

As per coding guidelines, e2e/**/*.go: Format test names as [Suite: component] Description (e.g., [Suite: cluster] Create Cluster via API).

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@e2e/cluster/update.go` around lines 15 - 16, The Describe title string passed to ginkgo.Describe currently contains an extra bracketed tag (`"[Suite: cluster][update] Cluster Update Lifecycle"`); update that string used in the ginkgo.Describe call to the canonical format by removing the extra `[update]` tag so it reads `"[Suite: cluster] Cluster Update Lifecycle"` (locate the ginkgo.Describe invocation in this file and replace the title argument accordingly).

e2e/nodepool/creation.go (1)
63-64: 💤 Low value: Use config values instead of hardcoded timeouts.

These timeouts are hardcoded rather than using configuration values. Consider using h.Cfg.Polling.Interval and a config-based timeout to maintain consistency with the rest of the test suite.

    - initStatusPollInterval := time.Second
    - initCheckTimeout := 3 * time.Second
    + initStatusPollInterval := h.Cfg.Polling.Interval
    + initCheckTimeout := h.Cfg.Timeouts.NodePool.InitialStatus // or appropriate config field

As per coding guidelines, e2e/**/*.go: "Use h.Cfg.Timeouts.* and h.Cfg.Polling.* values from config instead of hardcoding timeouts."

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@e2e/nodepool/creation.go` around lines 63 - 64, Replace the hardcoded durations assigned to initStatusPollInterval and initCheckTimeout with the config-driven values: set initStatusPollInterval = h.Cfg.Polling.Interval and set initCheckTimeout = h.Cfg.Timeouts.<appropriateTimeoutName> (e.g., InitCheck or equivalent timeout entry in your config) ensuring both are time.Duration types; update the assignments in creation.go where initStatusPollInterval and initCheckTimeout are defined and import/use the h.Cfg fields consistently with other e2e tests.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Duplicate comments:
In `@pkg/client/cluster.go`:
- Around line 76-122: The wrapper methods on HyperFleetClient call generated
OpenAPI methods DeleteClusterById and PatchClusterById which are currently
unresolved; regenerate/sync the OpenAPI client code and/or update the wrapper to
match the actual generated method names/signatures: verify the generated client
has methods named DeleteClusterById and PatchClusterById on the openapi Client,
confirm their parameter and return types match how
HyperFleetClient.PatchCluster, PatchClusterRaw and DeleteCluster call them, then
either regenerate the OpenAPI sources or rename/adjust the wrapper calls and
adapters so HyperFleetClient compiles against the true generated method names
and signatures.
In `@pkg/client/nodepool.go`:
- Around line 76-122: The wrapper calls DeleteNodePoolById and PatchNodePoolById
from the generated openapi Client (used in HyperFleetClient methods in
pkg/client/nodepool.go) but those generated methods may be missing or have
different signatures; verify the generated client defines func (c *Client)
DeleteNodePoolById(...) and PatchNodePoolById(...) with the exact
parameters/returns used here, and if they are absent or signatures differ
regenerate/update the OpenAPI client code (or adapt the wrapper to the actual
generated method names/signatures), then update the wrapper call sites in
PatchNodePool, PatchNodePoolRaw and DeleteNodePool code paths to match the
correct method names and parameter order/types so the package compiles.
In `@testdata/adapter-configs/cl-maestro/adapter-task-config.yaml`:
- Around line 169-190: The Finalized status expression currently returns True
when resources.?resource0.hasValue() is false, which can falsely mark cleanup
complete under propagationPolicy: Background because nested spoke resources
(e.g., namespace0, configmap0) may still exist; update the
Finalized/status.expression (and related reason/message expressions) to require
that all discovered spoke resources are absent before returning "True" — for
example, replace the single resources.?resource0.hasValue() check with a
combined check that namespace0 and configmap0 (and any other resources in the
resources collection) do not have values (e.g., ensure
resources.?resource0.hasValue() == false && resources.?namespace0.hasValue() ==
false && resources.?configmap0.hasValue() == false, or a generic "resources list
is empty" condition) so Finalized only becomes True once every nested spoke
resource is gone.
---
Nitpick comments:
In `@e2e/cluster/update.go`:
- Around line 15-16: The Describe title string passed to ginkgo.Describe
currently contains an extra bracketed tag (`"[Suite: cluster][update] Cluster
Update Lifecycle"`); update that string used in the ginkgo.Describe call to the
canonical format by removing the extra `[update]` tag so it reads `"[Suite:
cluster] Cluster Update Lifecycle"` (locate the ginkgo.Describe invocation in
this file and replace the title argument accordingly).
In `@e2e/nodepool/creation.go`:
- Around line 63-64: Replace the hardcoded durations assigned to
initStatusPollInterval and initCheckTimeout with the config-driven values: set
initStatusPollInterval = h.Cfg.Polling.Interval and set initCheckTimeout =
h.Cfg.Timeouts.<appropriateTimeoutName> (e.g., InitCheck or equivalent timeout
entry in your config) ensuring both are time.Duration types; update the
assignments in creation.go where initStatusPollInterval and initCheckTimeout are
defined and import/use the h.Cfg fields consistently with other e2e tests.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Enterprise
Run ID: 308c4bb9-35ac-4649-a9c6-8de8aa177672
📒 Files selected for processing (29)
CLAUDE.md, docs/development.md, e2e/cluster/concurrent_creation.go, e2e/cluster/creation.go, e2e/cluster/delete.go, e2e/cluster/update.go, e2e/nodepool/concurrent_creation.go, e2e/nodepool/creation.go, e2e/nodepool/delete.go, e2e/nodepool/update.go, pkg/client/client.go, pkg/client/cluster.go, pkg/client/constants.go, pkg/client/nodepool.go, pkg/helper/helper.go, pkg/helper/matchers.go, pkg/helper/pollers.go, pkg/helper/wait.go, test-design/testcases/delete-cluster.md, test-design/testcases/delete-nodepool.md, test-design/testcases/update-cluster.md, test-design/testcases/update-nodepool.md, testdata/adapter-configs/cl-deployment/adapter-task-config.yaml, testdata/adapter-configs/cl-job/adapter-task-config.yaml, testdata/adapter-configs/cl-maestro/adapter-task-config.yaml, testdata/adapter-configs/cl-namespace/adapter-task-config.yaml, testdata/adapter-configs/np-configmap/adapter-task-config.yaml, testdata/payloads/clusters/cluster-patch.json, testdata/payloads/nodepools/nodepool-patch.json
💤 Files with no reviewable changes (1)
- pkg/helper/wait.go
✅ Files skipped from review due to trivial changes (12)
- test-design/testcases/delete-nodepool.md
- testdata/payloads/clusters/cluster-patch.json
- testdata/payloads/nodepools/nodepool-patch.json
- pkg/client/constants.go
- test-design/testcases/update-nodepool.md
- pkg/helper/helper.go
- e2e/nodepool/delete.go
- testdata/adapter-configs/np-configmap/adapter-task-config.yaml
- pkg/client/client.go
- test-design/testcases/delete-cluster.md
- testdata/adapter-configs/cl-job/adapter-task-config.yaml
- CLAUDE.md
🚧 Files skipped from review as they are similar to previous changes (8)
- test-design/testcases/update-cluster.md
- e2e/cluster/concurrent_creation.go
- pkg/helper/pollers.go
- e2e/cluster/creation.go
- pkg/helper/matchers.go
- e2e/cluster/delete.go
- e2e/nodepool/update.go
- testdata/adapter-configs/cl-deployment/adapter-task-config.yaml
/test ?
🧹 Nitpick comments (1)
testdata/adapter-configs/np-configmap/adapter-task-config.yaml (1)
143-164: ⚡ Quick win: Keep Finalized.reason/message aligned with the status gate.
Finalized.status only becomes True when deletion is in progress, execution succeeded, and no resources were skipped. reason/message ignore those checks, so this block can report CleanupConfirmed while status is still False because execution failed or resources were skipped. That makes failures hard to interpret.
Suggested adjustment:
- type: "Finalized" status: expression: | is_deleting && adapter.?executionStatus.orValue("") == "success" && !adapter.?resourcesSkipped.orValue(false) && !resources.?nodepoolConfigMap.hasValue() ? "True" : "False" reason: expression: | !is_deleting ? "" - : !resources.?nodepoolConfigMap.hasValue() - ? "CleanupConfirmed" - : "CleanupInProgress" + : adapter.?executionStatus.orValue("") != "success" + ? "ExecutionFailed:" + adapter.?executionError.?phase.orValue("unknown") + : adapter.?resourcesSkipped.orValue(false) + ? "ResourcesSkipped" + : !resources.?nodepoolConfigMap.hasValue() + ? "CleanupConfirmed" + : "CleanupInProgress" message: expression: | !is_deleting ? "" - : !resources.?nodepoolConfigMap.hasValue() - ? "All resources deleted; cleanup confirmed" - : "Deletion in progress; waiting for configmap to be removed" + : adapter.?executionStatus.orValue("") != "success" + ? "Adapter deletion failed" + : adapter.?resourcesSkipped.orValue(false) + ? "Deletion skipped resources" + : !resources.?nodepoolConfigMap.hasValue() + ? "All resources deleted; cleanup confirmed" + : "Deletion in progress; waiting for configmap to be removed"🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@testdata/adapter-configs/np-configmap/adapter-task-config.yaml` around lines 143 - 164, The Finalized.reason and Finalized.message expressions are not gated the same way as Finalized.status and can report CleanupConfirmed while status is False; update the reason and message expressions to include the same leading guard used by status (is_deleting && adapter.?executionStatus.orValue("") == "success' && !adapter.?resourcesSkipped.orValue(false) && !resources.?nodepoolConfigMap.hasValue()) so they only return "CleanupConfirmed" when that full condition is met (otherwise return the appropriate fallback like "CleanupInProgress" or ""), keeping the same references to Finalized.status, Finalized.reason, Finalized.message, adapter.?executionStatus, adapter.?resourcesSkipped, resources.?nodepoolConfigMap and is_deleting.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Nitpick comments:
In `@testdata/adapter-configs/np-configmap/adapter-task-config.yaml`:
- Around line 143-164: The Finalized.reason and Finalized.message expressions
are not gated the same way as Finalized.status and can report CleanupConfirmed
while status is False; update the reason and message expressions to include the
same leading guard used by status (is_deleting &&
adapter.?executionStatus.orValue("") == "success" &&
!adapter.?resourcesSkipped.orValue(false) &&
!resources.?nodepoolConfigMap.hasValue()) so they only return "CleanupConfirmed"
when that full condition is met (otherwise return the appropriate fallback like
"CleanupInProgress" or ""), keeping the same references to Finalized.status,
Finalized.reason, Finalized.message, adapter.?executionStatus,
adapter.?resourcesSkipped, resources.?nodepoolConfigMap and is_deleting.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Enterprise
Run ID: 29625ec7-7508-4cae-b989-a2deeeb57fc7
📒 Files selected for processing (31)
CLAUDE.md, docs/architecture.md, docs/development.md, e2e/cluster/concurrent_creation.go, e2e/cluster/creation.go, e2e/cluster/delete.go, e2e/cluster/update.go, e2e/nodepool/concurrent_creation.go, e2e/nodepool/creation.go, e2e/nodepool/delete.go, e2e/nodepool/update.go, pkg/client/client.go, pkg/client/cluster.go, pkg/client/constants.go, pkg/client/nodepool.go, pkg/helper/helper.go, pkg/helper/matchers.go, pkg/helper/pollers.go, pkg/helper/validation.go, pkg/helper/wait.go, test-design/testcases/delete-cluster.md, test-design/testcases/delete-nodepool.md, test-design/testcases/update-cluster.md, test-design/testcases/update-nodepool.md, testdata/adapter-configs/cl-deployment/adapter-task-config.yaml, testdata/adapter-configs/cl-job/adapter-task-config.yaml, testdata/adapter-configs/cl-maestro/adapter-task-config.yaml, testdata/adapter-configs/cl-namespace/adapter-task-config.yaml, testdata/adapter-configs/np-configmap/adapter-task-config.yaml, testdata/payloads/clusters/cluster-patch.json, testdata/payloads/nodepools/nodepool-patch.json
💤 Files with no reviewable changes (2)
- pkg/helper/wait.go
- pkg/helper/helper.go
✅ Files skipped from review due to trivial changes (11)
- testdata/payloads/nodepools/nodepool-patch.json
- test-design/testcases/delete-nodepool.md
- test-design/testcases/delete-cluster.md
- e2e/nodepool/update.go
- testdata/adapter-configs/cl-namespace/adapter-task-config.yaml
- pkg/client/client.go
- testdata/payloads/clusters/cluster-patch.json
- e2e/cluster/concurrent_creation.go
- docs/architecture.md
- test-design/testcases/update-nodepool.md
- CLAUDE.md
🚧 Files skipped from review as they are similar to previous changes (9)
- test-design/testcases/update-cluster.md
- pkg/helper/pollers.go
- e2e/cluster/creation.go
- e2e/cluster/update.go
- e2e/cluster/delete.go
- testdata/adapter-configs/cl-maestro/adapter-task-config.yaml
- e2e/nodepool/delete.go
- testdata/adapter-configs/cl-job/adapter-task-config.yaml
- testdata/adapter-configs/cl-deployment/adapter-task-config.yaml
Actionable comments posted: 2
🧹 Nitpick comments (3)
e2e/cluster/update.go (1)
50-64: ⚡ Quick win: Also assert Available=True at the patched generation.
This spec currently passes once Reconciled=True is observed for expectedGen. A regression where the update path never refreshes Available for the new generation would still slip through.
Suggested tightening:
  ginkgo.By("verifying cluster reaches Reconciled=True at new generation")
  Eventually(func(g Gomega) {
      finalCluster, err := h.Client.GetCluster(ctx, clusterID)
      g.Expect(err).NotTo(HaveOccurred())
      g.Expect(finalCluster.Generation).To(Equal(expectedGen), "final cluster generation should match expected")
-     found := false
+     reconciledFound := false
+     availableFound := false
      for _, cond := range finalCluster.Status.Conditions {
          if cond.Type == client.ConditionTypeReconciled && cond.Status == openapi.ResourceConditionStatusTrue {
-             found = true
+             reconciledFound = true
              g.Expect(cond.ObservedGeneration).To(Equal(expectedGen), "Reconciled condition observed_generation should match expected")
          }
+         if cond.Type == client.ConditionTypeAvailable && cond.Status == openapi.ResourceConditionStatusTrue {
+             availableFound = true
+             g.Expect(cond.ObservedGeneration).To(Equal(expectedGen), "Available condition observed_generation should match expected")
+         }
      }
-     g.Expect(found).To(BeTrue(), "cluster should have Reconciled=True")
+     g.Expect(reconciledFound).To(BeTrue(), "cluster should have Reconciled=True")
+     g.Expect(availableFound).To(BeTrue(), "cluster should have Available=True")
  }, h.Cfg.Timeouts.Cluster.Reconciled, h.Cfg.Polling.Interval).Should(Succeed())
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@e2e/cluster/update.go` around lines 50 - 64, The test currently only asserts Reconciled=True at expectedGen; extend the Eventually block (after retrieving finalCluster via h.Client.GetCluster and alongside the Reconciled checks) to also verify there is an Available condition (cond.Type == client.ConditionTypeAvailable) with Status == openapi.ResourceConditionStatusTrue and that its ObservedGeneration equals expectedGen, and fail the test if no such Available=True condition is found (similar to the existing Reconciled check).
e2e/cluster/delete.go (1)
17-18: ⚡ Quick win: Keep both suite titles in the required [Suite: component] Description format.
The extra [delete] segment makes these names drift from the repo's documented suite naming convention.
Suggested rename:
-var _ = ginkgo.Describe("[Suite: cluster][delete] Cluster Deletion Lifecycle",
+var _ = ginkgo.Describe("[Suite: cluster] Cluster Deletion Lifecycle",
-var _ = ginkgo.Describe("[Suite: cluster][delete] Cluster Cascade Deletion",
+var _ = ginkgo.Describe("[Suite: cluster] Cluster Cascade Deletion",
As per coding guidelines, Test name format must be [Suite: component] Description.
Also applies to: 115-116
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@e2e/cluster/delete.go` around lines 17 - 18, The ginkgo.Describe test titles include an extra "[delete]" segment that violates the required "[Suite: component] Description" format; update the ginkgo.Describe calls (e.g., the top-level ginkgo.Describe in e2e/cluster/delete.go and the other Describe usages around the same file) to remove the "[delete]" token so the titles read only in the "[Suite: component] Description" form, preserving the existing component label (ginkgo.Label(labels.Tier0)) and the human-readable description text.
docs/development.md (1)
65-112: ⚡ Quick win: Make the template BeforeEach context-aware.
This is the copy-paste starter example, so keeping ginkgo.BeforeEach(func() { ... }) here will steer new tests away from the required lifecycle-block signature.
Suggested doc fix:
- ginkgo.BeforeEach(func() {
+ ginkgo.BeforeEach(func(ctx context.Context) {
      h = helper.New()
  })
As per coding guidelines, "Test structure must include ginkgo.BeforeEach, ginkgo.It, and ginkgo.AfterEach blocks with proper context handling" and "All test functions must receive context.Context parameter and propagate it to API calls."
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@docs/development.md` around lines 65 - 112, Update the ginkgo.BeforeEach block to use the context-aware lifecycle signature so it matches other lifecycle functions and propagates context: change ginkgo.BeforeEach(func() { h = helper.New() }) to ginkgo.BeforeEach(func(ctx context.Context) { h = helper.New() }) (keeping the same body), ensuring the existing import of context is used; this makes the BeforeEach, ginkgo.It, and ginkgo.AfterEach signatures consistent and allows future context usage in helper.New or setup steps.
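For reference, a small sketch of the context-aware template block this comment asks for, assuming the docs example keeps the same helper.New() body:

```go
var h *helper.Helper

ginkgo.BeforeEach(func(ctx context.Context) {
	// Context-aware signature: matches the It/AfterEach blocks and lets
	// future setup steps propagate ctx to API calls.
	h = helper.New()
})
```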
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@CLAUDE.md`:
- Around line 139-144: The ginkgo By step text is inconsistent with the matcher:
change the step description used in ginkgo.By to reference "Reconciled" instead
of "Ready" so it matches the ConditionTypeReconciled check; update the string
passed to ginkgo.By (around the Eventually call that uses h.PollCluster(ctx,
clusterID) and helper.HaveResourceCondition with client.ConditionTypeReconciled)
to a message like "waiting for cluster to become Reconciled" (a short sketch of the corrected step appears after this list).
In `@e2e/nodepool/delete.go`:
- Around line 91-112: The test assumes PATCH must return 409 but races with
hard-delete which may return 404; update the PatchNodePoolRaw assertion in the
ginkgo.It to accept either http.StatusConflict (409) or http.StatusNotFound
(404), and if PATCH returns 404 skip the subsequent GetNodePool
generation/DeletedTime assertions (or assert that GetNodePool returns 404 as
well), referencing PatchNodePoolRaw, GetNodePool, deletedGeneration and
DeletedTime so the test handles both outcomes without flaking.
---
Nitpick comments:
In `@docs/development.md`:
- Around line 65-112: Update the ginkgo.BeforeEach block to use the
context-aware lifecycle signature so it matches other lifecycle functions and
propagates context: change ginkgo.BeforeEach(func() { h = helper.New() }) to
ginkgo.BeforeEach(func(ctx context.Context) { h = helper.New() }) (keeping the
same body), ensuring the existing import of context is used; this makes the
BeforeEach, ginkgo.It, and ginkgo.AfterEach signatures consistent and allows
future context usage in helper.New or setup steps.
In `@e2e/cluster/delete.go`:
- Around line 17-18: The ginkgo.Describe test titles include an extra "[delete]"
segment that violates the required "[Suite: component] Description" format;
update the ginkgo.Describe calls (e.g., the top-level ginkgo.Describe in
e2e/cluster/delete.go and the other Describe usages around the same file) to
remove the "[delete]" token so the titles read only in the "[Suite: component]
Description" form, preserving the existing component label
(ginkgo.Label(labels.Tier0)) and the human-readable description text.
In `@e2e/cluster/update.go`:
- Around line 50-64: The test currently only asserts Reconciled=True at
expectedGen; extend the Eventually block (after retrieving finalCluster via
h.Client.GetCluster and alongside the Reconciled checks) to also verify there is
an Available condition (cond.Type == client.ConditionTypeAvailable) with Status
== openapi.ResourceConditionStatusTrue and that its ObservedGeneration equals
expectedGen, and fail the test if no such Available=True condition is found
(similar to the existing Reconciled check).
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Enterprise
Run ID: c3a62736-a85e-4e8f-aed3-b16f99eafd6f
📒 Files selected for processing (15)
CLAUDE.md, docs/architecture.md, docs/development.md, e2e/cluster/concurrent_creation.go, e2e/cluster/creation.go, e2e/cluster/delete.go, e2e/cluster/update.go, e2e/nodepool/concurrent_creation.go, e2e/nodepool/creation.go, e2e/nodepool/delete.go, e2e/nodepool/update.go, pkg/helper/helper.go, pkg/helper/matchers.go, pkg/helper/validation.go, pkg/helper/wait.go
💤 Files with no reviewable changes (2)
- pkg/helper/helper.go
- pkg/helper/wait.go
🚧 Files skipped from review as they are similar to previous changes (1)
- e2e/nodepool/update.go
Actionable comments posted: 4
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@e2e/cluster/delete.go`:
- Around line 92-98: The 409 branch assumes GetCluster(ctx, clusterID) must
succeed but the cluster may hard-delete immediately; update the check in the
HTTP 409 branch (where httpErr.StatusCode == http.StatusConflict) to tolerate a
not-found outcome: call h.Client.GetCluster(ctx, clusterID) and if it returns a
not-found error accept that as OK, otherwise assert no error and then verify
cluster.Generation equals deletedGeneration and cluster.DeletedTime is non-nil;
reference the existing GetCluster call, httpErr.StatusCode, deletedGeneration,
and cluster.DeletedTime when implementing this conditional handling.
In `@e2e/cluster/update.go`:
- Around line 40-64: The test currently only checks generation bump and
reconciliation but never verifies that the actual field from cluster-patch.json
was applied; after calling PatchClusterFromPayload (patchedCluster) and/or after
the final GetCluster(...) (finalCluster) assert that a specific patched field
(e.g., the name/label/annotation/value you changed in cluster-patch.json) equals
the expected value from the payload; add the assertion immediately after
receiving patchedCluster and again (or instead) inside the eventual block that
fetches finalCluster to ensure the persisted value matches the patch once
adapters have reconciled.
In `@e2e/nodepool/delete.go`:
- Around line 109-115: The 409 validation currently assumes GetNodePool will
succeed; however the nodepool may be hard-deleted before the GET and return
NotFound, flaking the test. Update the 409 branch around the GetNodePool call
(the logic referencing h.Client.GetNodePool, np.Generation, deletedGeneration,
and np.DeletedTime) to accept either: (a) a successful GET where Generation ==
deletedGeneration and DeletedTime != nil, or (b) a GET that returns a
NotFound/404 error (treat as valid hard-delete outcome). Implement this by
calling h.Client.GetNodePool, checking the error: if it's a NotFound/404 treat
the case as expected and return; otherwise assert no error and then assert
Generation and DeletedTime as before.
In `@pkg/helper/matchers.go`:
- Around line 169-180: In extractResourceConditions, guard against a nil Status
before accessing Conditions: in the switch cases for *openapi.Cluster and
*openapi.NodePool (and any similar branches), check if v.Status == nil and if so
return nil, nil (or an appropriate nil slice and nil error) before returning
v.Status.Conditions; update extractResourceConditions to perform this nil-status
check for both Cluster and NodePool to avoid panics during early polls.
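Several items in this list hinge on treating a 404 as an acceptable outcome after a delete race. A small helper along these lines keeps those branches readable; it assumes client.HTTPError exposes a StatusCode field, as the comments above imply:

```go
// isNotFound reports whether err is an HTTP 404 from the HyperFleet client.
// Assumes client.HTTPError carries a StatusCode field, per the review comments.
func isNotFound(err error) bool {
	var httpErr *client.HTTPError
	return errors.As(err, &httpErr) && httpErr.StatusCode == http.StatusNotFound
}
```

The 409 branches in e2e/cluster/delete.go and e2e/nodepool/delete.go could then tolerate the hard-delete race with a single `if isNotFound(err) { return }` before their existing assertions.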
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Enterprise
Run ID: ca85bd34-b1d5-4821-82d8-8b4874b7d695
📒 Files selected for processing (17)
CLAUDE.md, docs/architecture.md, docs/development.md, e2e/cluster/concurrent_creation.go, e2e/cluster/creation.go, e2e/cluster/delete.go, e2e/cluster/update.go, e2e/nodepool/concurrent_creation.go, e2e/nodepool/creation.go, e2e/nodepool/delete.go, e2e/nodepool/update.go, pkg/client/cluster.go, pkg/client/nodepool.go, pkg/helper/helper.go, pkg/helper/matchers.go, pkg/helper/validation.go, pkg/helper/wait.go
💤 Files with no reviewable changes (2)
- pkg/helper/helper.go
- pkg/helper/wait.go
🚧 Files skipped from review as they are similar to previous changes (3)
- e2e/nodepool/concurrent_creation.go
- pkg/helper/validation.go
- CLAUDE.md
/test all
Actionable comments posted: 2
♻️ Duplicate comments (4)
pkg/helper/matchers.go (1)
169-184: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
v.Status nil dereference will panic instead of retrying on early polls.
Both *openapi.Cluster and *openapi.NodePool cases guard v == nil but not v.Status == nil. Since Status is a pointer type (test code asserts cluster.Status != nil), an early-poll response with an unpopulated Status will panic at v.Status.Conditions instead of returning nil and letting Eventually retry.
🛡️ Proposed fix:
  case *openapi.Cluster:
-     if v == nil {
+     if v == nil || v.Status == nil {
          return nil, nil
      }
      return v.Status.Conditions, nil
  case *openapi.NodePool:
-     if v == nil {
+     if v == nil || v.Status == nil {
          return nil, nil
      }
      return v.Status.Conditions, nil
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@pkg/helper/matchers.go` around lines 169 - 184, The function extractResourceConditions can panic by dereferencing v.Status when Status is nil; update both type branches for *openapi.Cluster and *openapi.NodePool in extractResourceConditions to check if v.Status == nil and return nil, nil before accessing v.Status.Conditions so early-poll responses without Status will retry instead of causing a panic.
e2e/nodepool/delete.go (1)
109-115: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
Hard-delete race still not guarded in the 409 verification branch.
On line 112, GetNodePool is called after confirming a 409 PATCH rejection, but the nodepool can be hard-deleted before this GET returns. The unchecked Expect(err).NotTo(HaveOccurred()) on line 112 then fails on 404, causing a flake.
🛡️ Proposed fix (matches the pattern already used elsewhere in this file):
  if httpErr.StatusCode == http.StatusConflict {
      ginkgo.By("verifying nodepool state is unchanged after rejected PATCH")
      np, err := h.Client.GetNodePool(ctx, clusterID, nodepoolID)
+     if err != nil {
+         var getHTTPErr *client.HTTPError
+         if errors.As(err, &getHTTPErr) && getHTTPErr.StatusCode == http.StatusNotFound {
+             return // hard-delete raced the 409; deletion is still correct behaviour
+         }
+     }
      Expect(err).NotTo(HaveOccurred())
      Expect(np.Generation).To(Equal(deletedGeneration), "generation should not change after rejected PATCH")
      Expect(np.DeletedTime).NotTo(BeNil(), "nodepool should still be marked as deleted")
  }
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@e2e/nodepool/delete.go` around lines 109 - 115, The 409 branch calls h.Client.GetNodePool and unconditionally Expect(err).NotTo(HaveOccurred()), which flakes if the nodepool was hard-deleted (404); change the 409 verification to handle a transient NotFound by polling like other checks: use gomega.Eventually to call h.Client.GetNodePool(ctx, clusterID, nodepoolID) until it either returns the expected nodepool with Generation == deletedGeneration and DeletedTime != nil, or returns a 404 (treat 404 as acceptable terminal state), and assert accordingly; update the block around the httpErr.StatusCode == http.StatusConflict check to use Eventually and to accept a NotFound error as a non-flaky outcome.
e2e/cluster/update.go (1)
40-44: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
Test never validates that the patch field was actually persisted.
After PatchClusterFromPayload, the test only asserts patchedCluster.Generation incremented. Any concrete field changed in cluster-patch.json is never verified. The test passes even if the server bumps generation but ignores the payload.
Add at least one assertion on the patched field, e.g.:
  // After PatchClusterFromPayload, verify the mutated field:
  Expect(patchedCluster.Spec[/* patched key */]).To(Equal(/* expected value */))
Or, equivalently, re-read with GetCluster after reconciliation inside the final Eventually block and assert the field there.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@e2e/cluster/update.go` around lines 40 - 44, The test currently only checks Generation after calling PatchClusterFromPayload; update it to also assert that a concrete patched field from the payload was persisted: after calling h.Client.PatchClusterFromPayload (and/or inside the existing Eventually reconciliation block) read the cluster (using patchedCluster returned or h.Client.GetCluster) and assert the specific mutated field from "payloads/clusters/cluster-patch.json" equals the expected value; reference the variables patchedCluster, clusterBefore.Generation and methods PatchClusterFromPayload and GetCluster to locate where to add the Expect for the patched Spec key.
e2e/cluster/delete.go (1)
92-98: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
Handle immediate hard-delete in the 409 branch to avoid flakes.
On line 94, GetCluster is required to succeed after a 409, but the cluster may hard-delete between calls. Treat 404 as an acceptable terminal state here too.
Suggested minimal fix:
  if httpErr.StatusCode == http.StatusConflict {
      ginkgo.By("verifying cluster state is unchanged after rejected PATCH")
      cluster, err := h.Client.GetCluster(ctx, clusterID)
+     if err != nil {
+         var getHTTPErr *client.HTTPError
+         if errors.As(err, &getHTTPErr) && getHTTPErr.StatusCode == http.StatusNotFound {
+             return
+         }
+     }
      Expect(err).NotTo(HaveOccurred())
      Expect(cluster.Generation).To(Equal(deletedGeneration), "generation should not change after rejected PATCH")
      Expect(cluster.DeletedTime).NotTo(BeNil(), "cluster should still be marked as deleted")
  }
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@e2e/cluster/delete.go` around lines 92 - 98, The 409 branch assumes GetCluster will always succeed but the cluster may have been hard-deleted; after calling h.Client.GetCluster(ctx, clusterID) handle a 404 as an acceptable terminal state: if the GetCluster error indicates NotFound then return/pass the check, otherwise require no error and assert cluster.Generation equals deletedGeneration and cluster.DeletedTime is not nil. Update the logic around the GetCluster call (referencing GetCluster, cluster, deletedGeneration and httpErr.StatusCode) to accept a 404 result as valid instead of failing the test.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@e2e/cluster/delete.go`:
- Around line 17-18: The test suite name string passed to ginkgo.Describe uses
the wrong format ("[Suite: cluster][delete] Cluster Deletion Lifecycle"); update
the Describe invocation(s) (e.g., the ginkgo.Describe call in this file) to
follow the required "[Suite: component] Description" pattern by removing the
extra bracketed action and moving "delete" into the description text — e.g.,
change to "[Suite: cluster] Cluster Deletion Lifecycle" or "[Suite: cluster]
Delete Cluster via API"; apply the same fix to the other similar Describe usages
in this file that use "[Suite: cluster][...]" format.
- Around line 101-114: Replace the warning-only cleanup logic in the AfterEach
closure with test assertions: when calling h.Client.GetCluster(ctx, clusterID)
and checking cluster.DeletedTime use Expect to assert no unexpected errors
occurred, and when calling h.Client.DeleteCluster(ctx, clusterID) assert the
call either succeeds or returns a not-found (HTTP 404) error; finally replace
the ginkgo.GinkgoWriter.Printf on h.CleanupTestCluster(ctx, clusterID) with
Expect(err).NotTo(HaveOccurred()) so cleanup failures fail the test. Reference
the AfterEach anonymous function, h.Client.GetCluster, h.Client.DeleteCluster,
cluster.DeletedTime and h.CleanupTestCluster when locating and updating the
code.
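A rough sketch of the stricter AfterEach this comment describes, reusing the isNotFound helper sketched earlier and assuming the GetCluster, DeleteCluster, and CleanupTestCluster methods keep the signatures referenced above:

```go
ginkgo.AfterEach(func(ctx context.Context) {
	if h == nil || clusterID == "" {
		return
	}
	cluster, err := h.Client.GetCluster(ctx, clusterID)
	if err == nil && cluster.DeletedTime == nil {
		// Cluster was never soft-deleted by the spec; delete it now and
		// accept "already gone" (404) as success.
		if delErr := h.Client.DeleteCluster(ctx, clusterID); delErr != nil {
			Expect(isNotFound(delErr)).To(BeTrue(), "unexpected error deleting cluster %s: %v", clusterID, delErr)
		}
	}
	err = h.CleanupTestCluster(ctx, clusterID)
	Expect(err).NotTo(HaveOccurred(), "cleanup failed for cluster %s", clusterID)
})
```

Note that this makes cleanup failures fatal, which is what the inline comment asks for here even though other comments in this PR prefer warning-only cleanup; whichever convention the repo settles on should be applied consistently.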
---
Duplicate comments:
In `@e2e/cluster/delete.go`:
- Around line 92-98: The 409 branch assumes GetCluster will always succeed but
the cluster may have been hard-deleted; after calling h.Client.GetCluster(ctx,
clusterID) handle a 404 as an acceptable terminal state: if the GetCluster error
indicates NotFound then return/pass the check, otherwise require no error and
assert cluster.Generation equals deletedGeneration and cluster.DeletedTime is
not nil. Update the logic around the GetCluster call (referencing GetCluster,
cluster, deletedGeneration and httpErr.StatusCode) to accept a 404 result as
valid instead of failing the test.
In `@e2e/cluster/update.go`:
- Around line 40-44: The test currently only checks Generation after calling
PatchClusterFromPayload; update it to also assert that a concrete patched field
from the payload was persisted: after calling h.Client.PatchClusterFromPayload
(and/or inside the existing Eventually reconciliation block) read the cluster
(using patchedCluster returned or h.Client.GetCluster) and assert the specific
mutated field from "payloads/clusters/cluster-patch.json" equals the expected
value; reference the variables patchedCluster, clusterBefore.Generation and
methods PatchClusterFromPayload and GetCluster to locate where to add the Expect
for the patched Spec key.
In `@e2e/nodepool/delete.go`:
- Around line 109-115: The 409 branch calls h.Client.GetNodePool and
unconditionally Expect(err).NotTo(HaveOccurred()), which flakes if the nodepool
was hard-deleted (404); change the 409 verification to handle a transient
NotFound by polling like other checks: use gomega.Eventually to call
h.Client.GetNodePool(ctx, clusterID, nodepoolID) until it either returns the
expected nodepool with Generation == deletedGeneration and DeletedTime != nil,
or returns a 404 (treat 404 as acceptable terminal state), and assert
accordingly; update the block around the httpErr.StatusCode ==
http.StatusConflict check to use Eventually and to accept a NotFound error as a
non-flaky outcome.
In `@pkg/helper/matchers.go`:
- Around line 169-184: The function extractResourceConditions can panic by
dereferencing v.Status when Status is nil; update both type branches for
*openapi.Cluster and *openapi.NodePool in extractResourceConditions to check if
v.Status == nil and return nil, nil before accessing v.Status.Conditions so
early-poll responses without Status will retry instead of causing a panic.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Enterprise
Run ID: 10a14177-a2c5-447e-a828-3a130a1d26f4
📒 Files selected for processing (31)
CLAUDE.md, docs/architecture.md, docs/development.md, e2e/cluster/concurrent_creation.go, e2e/cluster/creation.go, e2e/cluster/delete.go, e2e/cluster/update.go, e2e/nodepool/concurrent_creation.go, e2e/nodepool/creation.go, e2e/nodepool/delete.go, e2e/nodepool/update.go, pkg/client/client.go, pkg/client/cluster.go, pkg/client/constants.go, pkg/client/nodepool.go, pkg/helper/helper.go, pkg/helper/matchers.go, pkg/helper/pollers.go, pkg/helper/validation.go, pkg/helper/wait.go, test-design/testcases/delete-cluster.md, test-design/testcases/delete-nodepool.md, test-design/testcases/update-cluster.md, test-design/testcases/update-nodepool.md, testdata/adapter-configs/cl-deployment/adapter-task-config.yaml, testdata/adapter-configs/cl-job/adapter-task-config.yaml, testdata/adapter-configs/cl-maestro/adapter-task-config.yaml, testdata/adapter-configs/cl-namespace/adapter-task-config.yaml, testdata/adapter-configs/np-configmap/adapter-task-config.yaml, testdata/payloads/clusters/cluster-patch.json, testdata/payloads/nodepools/nodepool-patch.json
💤 Files with no reviewable changes (2)
- pkg/helper/helper.go
- pkg/helper/wait.go
✅ Files skipped from review due to trivial changes (10)
- test-design/testcases/delete-nodepool.md
- test-design/testcases/update-nodepool.md
- testdata/payloads/nodepools/nodepool-patch.json
- testdata/payloads/clusters/cluster-patch.json
- pkg/client/client.go
- testdata/adapter-configs/cl-namespace/adapter-task-config.yaml
- CLAUDE.md
- pkg/client/constants.go
- testdata/adapter-configs/cl-job/adapter-task-config.yaml
- test-design/testcases/delete-cluster.md
🚧 Files skipped from review as they are similar to previous changes (9)
- pkg/helper/validation.go
- e2e/nodepool/concurrent_creation.go
- testdata/adapter-configs/cl-maestro/adapter-task-config.yaml
- testdata/adapter-configs/cl-deployment/adapter-task-config.yaml
- test-design/testcases/update-cluster.md
- pkg/helper/pollers.go
- docs/development.md
- e2e/nodepool/update.go
- docs/architecture.md
Actionable comments posted: 3
♻️ Duplicate comments (3)
pkg/helper/matchers.go (1)
171-180: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
Guard nil Status before reading resource conditions.
Line 175 and Line 180 dereference v.Status.Conditions without checking v.Status == nil. In early poll cycles this can panic and fail the assertion loop.
Suggested fix:
  func extractResourceConditions(actual any) ([]openapi.ResourceCondition, error) {
      switch v := actual.(type) {
      case *openapi.Cluster:
-         if v == nil {
+         if v == nil || v.Status == nil {
              return nil, nil
          }
          return v.Status.Conditions, nil
      case *openapi.NodePool:
-         if v == nil {
+         if v == nil || v.Status == nil {
              return nil, nil
          }
          return v.Status.Conditions, nil
      default:
          return nil, fmt.Errorf("HaveResourceCondition expects *Cluster or *NodePool, got %T", actual)
      }
  }
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@pkg/helper/matchers.go` around lines 171 - 180, In the Cluster and NodePool matcher branches (the case for *openapi.Cluster and the case for *openapi.NodePool) you must guard v.Status before accessing v.Status.Conditions: if v is nil return nil,nil as already done, then check if v.Status == nil and return nil,nil (or an empty slice as appropriate) to avoid panics during early poll cycles; update both branches to perform this nil-check before returning v.Status.Conditions so callers never dereference a nil Status.
pkg/client/nodepool.go (1)
76-94: ⚠️ Potential issue | 🔴 Critical | ⚡ Quick win
Build-blocking API mismatch on delete/patch nodepool calls.
Line 76 and Line 94 call DeleteNodePoolById/PatchNodePoolById, but typecheck reports both as undefined on *HyperFleetClient. This blocks compilation and likely indicates generated OpenAPI client drift (the same symptom also appears in the cluster delete/patch calls).
  #!/bin/bash
  set -euo pipefail
  echo "== Wrapper call sites =="
  rg -n -C2 --type=go 'c\.(DeleteNodePoolById|PatchNodePoolById|DeleteClusterById|PatchClusterById)\(' pkg/client
  echo
  echo "== Generated method definitions (if present) =="
  rg -n -C2 --type=go '\bfunc\s+\(.*\)\s+(DeleteNodePoolById|PatchNodePoolById|DeleteClusterById|PatchClusterById)\b'
Expected result: every wrapper call has a matching generated method signature; otherwise regenerate the OpenAPI client and align call names/signatures.
As per coding guidelines, "Run make lint and verify zero errors before committing code."
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@pkg/client/nodepool.go` around lines 76 - 94, The compilation failures come from calling generated methods that don't exist on *HyperFleetClient (DeleteNodePoolById, PatchNodePoolById, DeleteClusterById, PatchClusterById); regenerate the OpenAPI client to restore the correct method names/signatures, then update the wrapper calls in functions like PatchNodePool and the delete nodepool/cluster wrappers to match the regenerated client API (or vice versa: rename the generated methods to the names used by these wrappers), and run make lint to ensure no mismatch remains; reference the wrapper functions (e.g., HyperFleetClient.PatchNodePool / DeleteNodePool) and the called symbols (DeleteNodePoolById, PatchNodePoolById, DeleteClusterById, PatchClusterById) when making the changes.
e2e/cluster/update.go (1)
41-45: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
Assert a concrete patched field value.
Line 44 validates generation churn, but the test still doesn't prove a specific value from cluster-patch.json was persisted. Add at least one direct field assertion on patchedCluster or finalCluster.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@e2e/cluster/update.go` around lines 41 - 45, The test only checks generation bump but not that a specific change from the payload persisted; open the payload at TestDataPath("payloads/clusters/cluster-patch.json"), pick at least one concrete field that the payload changes (for example a top-level spec or metadata field present in that JSON), and add an assertion such as Expect(patchedCluster.<FieldPath>).To(Equal(<expectedValueFromPayload>)) (or fetch finalCluster and assert the same on finalCluster.<FieldPath>) so the test verifies the actual patched field persisted; use the patchedCluster variable returned by PatchClusterFromPayload and reference clusterBefore or finalCluster only for context.
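As a concrete illustration, if cluster-patch.json changed, say, a label value, the assertion could look like the sketch below; the Labels["env"] field and the "updated" value are placeholders, not the actual payload contents:

```go
patchedCluster, err := h.Client.PatchClusterFromPayload(ctx, clusterID, "payloads/clusters/cluster-patch.json")
Expect(err).NotTo(HaveOccurred())
Expect(patchedCluster.Generation).To(Equal(clusterBefore.Generation+1), "generation should increment after PATCH")

// Hypothetical field: replace with whatever cluster-patch.json actually mutates.
Expect(patchedCluster.Labels).To(HaveKeyWithValue("env", "updated"),
	"patched label from cluster-patch.json should be persisted")
```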
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@e2e/cluster/update.go`:
- Line 57: The test iterates finalCluster.Status.Conditions inside an Eventually
polling loop without guarding against finalCluster.Status being nil; update the
predicate used by Eventually so it first checks that finalCluster != nil and
finalCluster.Status != nil (and optionally that
len(finalCluster.Status.Conditions) > 0) and return false to continue the retry
when Status is nil instead of ranging and panicking. Locate the Eventually block
that references finalCluster and change its predicate to early-return false when
finalCluster.Status is nil, then only iterate finalCluster.Status.Conditions
once the nil check passes.
In `@e2e/nodepool/update.go`:
- Around line 52-55: The test only checks generation increment after
PatchNodePoolFromPayload but not that the actual patch (nodepool-patch.json) was
applied; update the test to assert at least one concrete field from
nodepool-patch.json on the returned patchedNP (or call h.Client.GetNodePool to
fetch the current state) — for example assert patchedNP.<fieldName> equals the
value from nodepool-patch.json — using the same Expect(...) matcher style
already present so the test verifies the intended update was persisted.
- Around line 61-76: The one-shot read of finalNP.Status.Conditions can panic if
Status is nil and is racey for observedGeneration; replace the direct loop with
a retry that re-fetches the NodePool (use the same Eventually helper/polling
timeouts as the prior Reconciled check) and in each attempt nil-check finalNP
and finalNP.Status before iterating conditions, then assert that a condition
with Type == client.ConditionTypeReconciled exists and its ObservedGeneration
equals expectedGen; you can still use h.Client.GetNodePool and
h.HasResourceCondition inside the retry to locate the Reconciled condition
safely and fail only if the condition never appears with the expected
ObservedGeneration within the timeout.
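A sketch of the retry-based check described above; the timeout and condition field names follow the ones already referenced in this comment and should be treated as assumptions:

```go
Eventually(func(g Gomega) {
	finalNP, err := h.Client.GetNodePool(ctx, clusterID, nodepoolID)
	g.Expect(err).NotTo(HaveOccurred())
	g.Expect(finalNP).NotTo(BeNil())
	g.Expect(finalNP.Status).NotTo(BeNil(), "status not populated yet")

	found := false
	for _, cond := range finalNP.Status.Conditions {
		if cond.Type == client.ConditionTypeReconciled && cond.ObservedGeneration == expectedGen {
			found = true
		}
	}
	g.Expect(found).To(BeTrue(), "Reconciled condition should reach observed generation %d", expectedGen)
}, h.Cfg.Timeouts.NodePool.Reconciled, h.Cfg.Polling.Interval).Should(Succeed())
```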
---
Duplicate comments:
In `@e2e/cluster/update.go`:
- Around line 41-45: The test only checks generation bump but not that a
specific change from the payload persisted; open the payload at
TestDataPath("payloads/clusters/cluster-patch.json"), pick at least one concrete
field that the payload changes (for example a top-level spec or metadata field
present in that JSON), and add an assertion such as
Expect(patchedCluster.<FieldPath>).To(Equal(<expectedValueFromPayload>)) (or
fetch finalCluster and assert the same on finalCluster.<FieldPath>) so the test
verifies the actual patched field persisted; use the patchedCluster variable
returned by PatchClusterFromPayload and reference clusterBefore or finalCluster
only for context.
In `@pkg/client/nodepool.go`:
- Around line 76-94: The compilation failures come from calling generated
methods that don't exist on *HyperFleetClient (DeleteNodePoolById,
PatchNodePoolById, DeleteClusterById, PatchClusterById); regenerate the OpenAPI
client to restore the correct method names/signatures, then update the wrapper
calls in functions like PatchNodePool and the delete nodepool/cluster wrappers
to match the regenerated client API (or vice versa: rename the generated methods
to the names used by these wrappers), and run make lint to ensure no mismatch
remains; reference the wrapper functions (e.g., HyperFleetClient.PatchNodePool /
DeleteNodePool) and the called symbols (DeleteNodePoolById, PatchNodePoolById,
DeleteClusterById, PatchClusterById) when making the changes.
In `@pkg/helper/matchers.go`:
- Around line 171-180: In the Cluster and NodePool matcher branches (the case
for *openapi.Cluster and the case for *openapi.NodePool) you must guard v.Status
before accessing v.Status.Conditions: if v is nil return nil,nil as already
done, then check if v.Status == nil and return nil,nil (or an empty slice as
appropriate) to avoid panics during early poll cycles; update both branches to
perform this nil-check before returning v.Status.Conditions so callers never
dereference a nil Status.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Enterprise
Run ID: c7112546-8e86-46e5-8f30-2e5c89cc539f
📒 Files selected for processing (22)
CLAUDE.md, docs/architecture.md, docs/development.md, e2e/cluster/concurrent_creation.go, e2e/cluster/creation.go, e2e/cluster/delete.go, e2e/cluster/update.go, e2e/nodepool/concurrent_creation.go, e2e/nodepool/creation.go, e2e/nodepool/delete.go, e2e/nodepool/update.go, pkg/client/cluster.go, pkg/client/nodepool.go, pkg/helper/helper.go, pkg/helper/matchers.go, pkg/helper/validation.go, pkg/helper/wait.go, testdata/adapter-configs/cl-deployment/adapter-task-config.yaml, testdata/adapter-configs/cl-job/adapter-task-config.yaml, testdata/adapter-configs/cl-maestro/adapter-task-config.yaml, testdata/adapter-configs/cl-namespace/adapter-task-config.yaml, testdata/adapter-configs/np-configmap/adapter-task-config.yaml
💤 Files with no reviewable changes (2)
- pkg/helper/helper.go
- pkg/helper/wait.go
✅ Files skipped from review due to trivial changes (1)
- docs/architecture.md
🚧 Files skipped from review as they are similar to previous changes (13)
- e2e/nodepool/concurrent_creation.go
- e2e/cluster/creation.go
- testdata/adapter-configs/cl-maestro/adapter-task-config.yaml
- pkg/helper/validation.go
- testdata/adapter-configs/cl-namespace/adapter-task-config.yaml
- testdata/adapter-configs/np-configmap/adapter-task-config.yaml
- e2e/cluster/delete.go
- docs/development.md
- e2e/nodepool/delete.go
- CLAUDE.md
- e2e/nodepool/creation.go
- testdata/adapter-configs/cl-job/adapter-task-config.yaml
- testdata/adapter-configs/cl-deployment/adapter-task-config.yaml
0bb7491 to b763c0b (Compare)
@coderabbitai review
✅ Actions performed: Review triggered.
…ters and nodepools
@coderabbitai review
✅ Actions performed: Review triggered.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
e2e/cluster/concurrent_creation.go (1)
154-169: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win
Cleanup assertion inconsistent with coding guidelines.
Line 168 uses Expect(cleanupErrors).To(BeEmpty()), which fails the test on cleanup errors. Other tests in this PR (e.g., nodepool/concurrent_creation.go, nodepool/creation.go) only log cleanup failures as warnings. As per coding guidelines, cleanup failures should be logged as warnings instead of failing the test.
Suggested fix:
  ginkgo.AfterEach(func(ctx context.Context) {
      if h == nil || len(clusterIDs) == 0 {
          return
      }
      ginkgo.By(fmt.Sprintf("Cleaning up %d test clusters", len(clusterIDs)))
-     var cleanupErrors []error
      for _, clusterID := range clusterIDs {
          ginkgo.By("cleaning up cluster " + clusterID)
          if err := h.CleanupTestCluster(ctx, clusterID); err != nil {
              ginkgo.GinkgoWriter.Printf("ERROR: failed to cleanup cluster %s: %v\n", clusterID, err)
-             cleanupErrors = append(cleanupErrors, err)
          }
      }
-     Expect(cleanupErrors).To(BeEmpty(), "some clusters failed to cleanup")
  })
As per coding guidelines: "Always implement cleanup in AfterEach and log cleanup failures as warnings instead of failing the test."
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@e2e/cluster/concurrent_creation.go` around lines 154 - 169, The AfterEach cleanup currently calls Expect(cleanupErrors).To(BeEmpty()) which fails the test on cleanup errors; change it to only log warnings instead. In the ginkgo.AfterEach closure that iterates clusterIDs and collects cleanupErrors (the block containing h.CleanupTestCluster and cleanupErrors slice), remove the Expect(...) assertion and add a conditional that, when len(cleanupErrors) > 0, writes a warning via ginkgo.GinkgoWriter.Printf (or similar) summarizing the number of failed cleanups and their error details; ensure clusterID and cleanupErrors are used in the message so failures are logged but do not fail the test.
e2e/cluster/creation.go (1)
375-384: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win
Cleanup should log failures as warnings, not fail the test.
The AfterEach block uses Expect(err).NotTo(HaveOccurred()), which will fail the test if cleanup encounters an error. As per coding guidelines, cleanup failures should be logged as warnings instead of failing the test.
Suggested fix:
  ginkgo.AfterEach(func(ctx context.Context) {
      // Skip cleanup if helper not initialized or no cluster created
      if h == nil || clusterID == "" {
          return
      }
      ginkgo.By("cleaning up cluster " + clusterID)
-     err := h.CleanupTestCluster(ctx, clusterID)
-     Expect(err).NotTo(HaveOccurred(), "failed to cleanup cluster %s", clusterID)
+     if err := h.CleanupTestCluster(ctx, clusterID); err != nil {
+         ginkgo.GinkgoWriter.Printf("Warning: cleanup failed for cluster %s: %v\n", clusterID, err)
+     }
  })
As per coding guidelines: "Always implement cleanup in AfterEach and log cleanup failures as warnings instead of failing the test."
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@e2e/cluster/creation.go` around lines 375 - 384, Replace the failing assertion in the AfterEach cleanup with a warning log: where the code currently calls Expect(err).NotTo(HaveOccurred()) after calling h.CleanupTestCluster(ctx, clusterID), remove the Expect and instead log a warning that includes the error and the clusterID (for example via ginkgo.GinkgoWriter.Printf or the test helper's logger). Keep the existing guard (if h == nil || clusterID == "") and the call to h.CleanupTestCluster; only change the error handling so cleanup errors are logged as warnings (including err and clusterID) and do not cause the test to fail.
♻️ Duplicate comments (1)
pkg/helper/matchers.go (1)
169-184: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
Guard nil Status before reading Conditions.
The past review flagged that extractResourceConditions can panic when v.Status is nil (while v itself is non-nil). This issue appears unresolved: Line 175 and Line 180 still dereference v.Status.Conditions without checking v.Status == nil. During early polls, the API may return a resource with a nil Status, causing a panic instead of a graceful retry.
Suggested fix:
  func extractResourceConditions(actual any) ([]openapi.ResourceCondition, error) {
      switch v := actual.(type) {
      case *openapi.Cluster:
-         if v == nil {
+         if v == nil || v.Status == nil {
              return nil, nil
          }
          return v.Status.Conditions, nil
      case *openapi.NodePool:
-         if v == nil {
+         if v == nil || v.Status == nil {
              return nil, nil
          }
          return v.Status.Conditions, nil
      default:
          return nil, fmt.Errorf("HaveResourceCondition expects *Cluster or *NodePool, got %T", actual)
      }
  }
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@pkg/helper/matchers.go` around lines 169 - 184, extractResourceConditions can panic by dereferencing v.Status when Status is nil; update the function to guard v.Status for both case branches (when handling *openapi.Cluster and *openapi.NodePool) by returning nil, nil if v == nil OR v.Status == nil before accessing v.Status.Conditions. Modify the branches in extractResourceConditions so they check v.Status == nil and only return v.Status.Conditions when Status is non-nil; keep the default error return unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@CLAUDE.md`:
- Around line 156-193: The fenced code blocks in the documentation lack
surrounding blank lines, causing markdownlint MD031; add a single blank line
before and after each triple-backtick fence around the examples (the blocks
containing Eventually(...) and the other examples) so that each fenced block is
separated from adjacent text or other fences—update the sections that include
Eventually, PollCluster/PollNodePool,
PollClusterAdapterStatuses/PollNodePoolAdapterStatuses, PollClusterHTTPStatus,
and PollNamespacesByPrefix examples to have a blank line above and below each
fenced code block.
---
Outside diff comments:
In `@e2e/cluster/concurrent_creation.go`:
- Around line 154-169: The AfterEach cleanup currently calls
Expect(cleanupErrors).To(BeEmpty()) which fails the test on cleanup errors;
change it to only log warnings instead. In the ginkgo.AfterEach closure that
iterates clusterIDs and collects cleanupErrors (the block containing
h.CleanupTestCluster and cleanupErrors slice), remove the Expect(...) assertion
and add a conditional that, when len(cleanupErrors) > 0, writes a warning via
ginkgo.GinkgoWriter.Printf (or similar) summarizing the number of failed
cleanups and their error details; ensure clusterID and cleanupErrors are used in
the message so failures are logged but do not fail the test (a sketch of this pattern appears after this list).
In `@e2e/cluster/creation.go`:
- Around line 375-384: Replace the failing assertion in the AfterEach cleanup
with a warning log: where the code currently calls
Expect(err).NotTo(HaveOccurred()) after calling h.CleanupTestCluster(ctx,
clusterID), remove the Expect and instead log a warning that includes the error
and the clusterID (for example via ginkgo.GinkgoWriter.Printf or the test
helper's logger). Keep the existing guard (if h == nil || clusterID == "") and
the call to h.CleanupTestCluster; only change the error handling so cleanup
errors are logged as warnings (including err and clusterID) and do not cause the
test to fail.
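Picking up the concurrent_creation.go item above, one way to keep the cleanupErrors slice while still honoring the warnings-only guideline is to downgrade it to a summary warning; a minimal sketch, assuming the existing CleanupTestCluster helper:

```go
ginkgo.AfterEach(func(ctx context.Context) {
	if h == nil || len(clusterIDs) == 0 {
		return
	}
	var cleanupErrors []error
	for _, clusterID := range clusterIDs {
		if err := h.CleanupTestCluster(ctx, clusterID); err != nil {
			cleanupErrors = append(cleanupErrors, fmt.Errorf("cluster %s: %w", clusterID, err))
		}
	}
	if len(cleanupErrors) > 0 {
		// Warn instead of failing the spec, per the coding guidelines quoted above.
		ginkgo.GinkgoWriter.Printf("Warning: %d cluster(s) failed to clean up: %v\n", len(cleanupErrors), cleanupErrors)
	}
})
```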
---
Duplicate comments:
In `@pkg/helper/matchers.go`:
- Around line 169-184: extractResourceConditions can panic by dereferencing
v.Status when Status is nil; update the function to guard v.Status for both case
branches (when handling *openapi.Cluster and *openapi.NodePool) by returning
nil, nil if v == nil OR v.Status == nil before accessing v.Status.Conditions.
Modify the branches in extractResourceConditions so they check v.Status == nil
and only return v.Status.Conditions when Status is non-nil; keep the default
error return unchanged.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Central YAML (base), Organization UI (inherited)
Review profile: ASSERTIVE
Plan: Enterprise
Run ID: 6b15cd84-e900-4ec2-8407-86144419a7a6
📒 Files selected for processing (31)
CLAUDE.md, docs/architecture.md, docs/development.md, e2e/cluster/concurrent_creation.go, e2e/cluster/creation.go, e2e/cluster/delete.go, e2e/cluster/update.go, e2e/nodepool/concurrent_creation.go, e2e/nodepool/creation.go, e2e/nodepool/delete.go, e2e/nodepool/update.go, pkg/client/client.go, pkg/client/cluster.go, pkg/client/constants.go, pkg/client/nodepool.go, pkg/helper/helper.go, pkg/helper/matchers.go, pkg/helper/pollers.go, pkg/helper/validation.go, pkg/helper/wait.go, test-design/testcases/delete-cluster.md, test-design/testcases/delete-nodepool.md, test-design/testcases/update-cluster.md, test-design/testcases/update-nodepool.md, testdata/adapter-configs/cl-deployment/adapter-task-config.yaml, testdata/adapter-configs/cl-job/adapter-task-config.yaml, testdata/adapter-configs/cl-maestro/adapter-task-config.yaml, testdata/adapter-configs/cl-namespace/adapter-task-config.yaml, testdata/adapter-configs/np-configmap/adapter-task-config.yaml, testdata/payloads/clusters/cluster-patch.json, testdata/payloads/nodepools/nodepool-patch.json
💤 Files with no reviewable changes (2)
- pkg/helper/helper.go
- pkg/helper/wait.go
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: rh-amarin
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Details: Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
3f47690 into openshift-hyperfleet:main
Summary
Test Plan
- make test-all passes
- make lint passes
- make test-helm (if applicable)