diff --git a/TSG/Storage/HowTo-Storage-AddPhysicalDisksToS2DPool.md b/TSG/Storage/HowTo-Storage-AddPhysicalDisksToS2DPool.md
new file mode 100644
index 0000000..0a71d97
--- /dev/null
+++ b/TSG/Storage/HowTo-Storage-AddPhysicalDisksToS2DPool.md
@@ -0,0 +1,347 @@
+# How to add physical disks to an existing Azure Local cluster
+
+| | |
+| --- | --- |
+| **Component** | Storage |
+| **Topic** | Storage Spaces Direct: Add physical disks to an existing pool for online capacity expansion |
+| **Applicable Scenarios** | Day 2 Operations: Capacity expansion / Add disk |
+
+## Overview
+
+This guide describes the safe sequence for adding physical disks to an existing Azure Local cluster that uses Storage Spaces Direct (S2D).
+
+The intended path is online capacity expansion, but a disk add is still a storage-infrastructure change. Complete the health checks first, add disks symmetrically, and wait for storage jobs to complete before expanding volumes or performing any further maintenance.
+
+## What and Why
+
+### What This Guide Covers
+
+The end-to-end procedure to add OEM-supported physical disks to an existing S2D storage pool on a healthy Azure Local cluster, including the pre-checks, the insert-and-claim flow, monitoring redistribution jobs, and final validation.
+
+### When to Use This Guide
+
+Use this guide when:
+
+- The cluster is healthy.
+- New OEM-supported disks are being added for capacity expansion.
+- The goal is to add the disks to the existing Storage Spaces Direct pool.
+
+Do **not** use this guide for:
+
+- Replacing failed disks.
+- Adding cluster nodes.
+- Recovering from an unhealthy pool.
+- Adding unsupported disk models or firmware.
+- Changing cache/capacity tier design.
+
+## Prerequisites
+
+Before inserting disks, confirm all of the following:
+
+- All cluster nodes are up.
+- No active health faults.
+- No active storage jobs.
+- The pool and virtual disks are healthy.
+- The disk model and firmware are supported by the system vendor.
+- The same number and type of disks will be added to each node (drive symmetry).
+- Backups are current.
+
+> [!IMPORTANT]
+> Do not add disks while repair, rebuild, regeneration, rebalance, or optimization jobs are active. Adding disks under load can amplify rebuild work and extend the impact window.
+
+## Table of Contents
+
+- [Overview](#overview)
+- [What and Why](#what-and-why)
+- [Prerequisites](#prerequisites)
+- [Pre-check Commands](#pre-check-commands)
+- [Add Disks](#add-disks)
+- [Monitor Redistribution and Storage Jobs](#monitor-redistribution-and-storage-jobs)
+- [Confirm Added Capacity](#confirm-added-capacity)
+- [Expand Volumes](#expand-volumes)
+- [Verification](#verification)
+- [Troubleshooting](#troubleshooting)
+
+## Pre-check Commands
+
+Run all commands from an elevated PowerShell session on a cluster node.
+
+### Step 1: Check Cluster Health
+
+Confirm there are no active health faults and that all nodes are up before any storage change.
+
+```powershell
+# List active health faults across the cluster
+Get-HealthFault
+
+# Confirm every cluster node is in the Up state
+Get-ClusterNode | Sort-Object Name | Format-Table Name, State
+```
+
+Expected result:
+
+- No health faults returned.
+- All nodes show `Up`.
+
+### Step 2: Check Storage Health
+
+Confirm the pool, virtual disks, and storage jobs are in the expected state before adding new disks.
+
+```powershell
+# Pool health and capacity
+Get-StoragePool -IsPrimordial $false |
+ Format-Table FriendlyName, HealthStatus, OperationalStatus, Size, AllocatedSize
+
+# Virtual disk health and footprint
+Get-VirtualDisk |
+ Format-Table FriendlyName, HealthStatus, OperationalStatus, ProvisioningType, Size, FootprintOnPool
+
+# Active storage jobs (must be empty before proceeding)
+Get-StorageJob
+```
+
+Expected result:
+
+- Storage pool and virtual disks are healthy.
+- No active storage jobs are running.
+
+### Step 3: Capture Current Disk Inventory
+
+Take a baseline so the new disks can be identified by serial number after insertion.
+
+```powershell
+# Save the current physical disk inventory; compare after insertion
+Get-PhysicalDisk |
+ Sort-Object DeviceId |
+ Format-Table DeviceId, FriendlyName, SerialNumber, MediaType, BusType, Size, FirmwareVersion, HealthStatus, Usage, CanPool, CannotPoolReason
+```
+
+Save the output so it can be compared after the new disks are added.
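+
+One hedged way to persist that baseline is to export it to CSV (the `C:\Temp` path is only an example):
+
+```powershell
+# Save the pre-insertion inventory to CSV for later comparison (example path)
+Get-PhysicalDisk | Sort-Object DeviceId |
+    Select-Object DeviceId, FriendlyName, SerialNumber, MediaType, BusType, Size, FirmwareVersion, HealthStatus, Usage, CanPool, CannotPoolReason |
+    Export-Csv -Path 'C:\Temp\disk-baseline-before.csv' -NoTypeInformation
+```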
+
+### Step 4: Check Disk Symmetry
+
+Each node should have the same count for each disk type used by the cluster.
+
+```powershell
+# Per-node disk counts grouped by media type; counts should match across nodes
+Get-StorageNode | ForEach-Object {
+ $node = $_
+ Get-PhysicalDisk -StorageNode $node |
+ Group-Object MediaType |
+ ForEach-Object {
+ [PSCustomObject]@{
+ Node = $node.Name
+ MediaType = $_.Name
+ Count = $_.Count
+ }
+ }
+} | Sort-Object Node, MediaType | Format-Table
+```
+
+> [!WARNING]
+> Adding disks asymmetrically (different counts per node) can strand capacity and reduce resiliency. Correct asymmetry before adding the new disks to the pool.
+
+## Add Disks
+
+### Step 1: Insert Disks Symmetrically
+
+Add the same number of supported disks to each node. Keep slot placement consistent across nodes if the hardware platform supports a consistent slot layout.
+
+> [!NOTE]
+> Do not reboot nodes as part of this procedure unless the hardware vendor explicitly requires it.
+
+### Step 2: Confirm Windows Sees the New Disks
+
+After insertion, wait a few minutes and re-run the inventory command.
+
+```powershell
+# Re-inventory; new serial numbers should appear
+Get-PhysicalDisk |
+ Sort-Object DeviceId |
+ Format-Table DeviceId, FriendlyName, SerialNumber, MediaType, BusType, Size, FirmwareVersion, HealthStatus, Usage, CanPool, CannotPoolReason
+```
+
+Expected result:
+
+- New disks are visible.
+- Disk model, firmware, media type, and size match the plan.
+- New disks show `CanPool=True`, or `CanPool=False` with a `CannotPoolReason` of `Verification in progress` or `In a Pool`.
+
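+If the pre-check inventory was exported to a CSV (for example `C:\Temp\disk-baseline-before.csv`, a hypothetical path), the new serial numbers can be isolated with a short diff:
+
+```powershell
+# List only disks whose serial numbers are absent from the saved baseline (example path)
+$before = Import-Csv 'C:\Temp\disk-baseline-before.csv'
+Get-PhysicalDisk |
+    Where-Object { $_.SerialNumber -notin $before.SerialNumber } |
+    Format-Table DeviceId, FriendlyName, SerialNumber, MediaType, Size
+```
+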
+> [!CAUTION]
+> If the disks do not appear, check hardware visibility (slot, cabling, vendor management UI) first. Do **not** run storage reset commands for disks that are not visible or not identified.
+
+### Step 3: Wait for Automatic Pooling
+
+Storage Spaces Direct normally claims eligible disks and adds them to the pool automatically.
+
+```powershell
+# Look for unclaimed eligible disks
+Get-PhysicalDisk -CanPool $true |
+ Format-Table DeviceId, FriendlyName, SerialNumber, MediaType, Size, FirmwareVersion
+```
+
+If this returns no rows and the new disks show `CannotPoolReason = In a Pool`, the disks were claimed automatically.
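+
+As an optional spot check (the serial number is a placeholder to fill in), confirm that a specific new disk landed in the non-primordial pool:
+
+```powershell
+# Confirm one new disk was claimed; fill in its serial number
+$serial = ''
+Get-StoragePool -IsPrimordial $false |
+    Get-PhysicalDisk |
+    Where-Object SerialNumber -eq $serial |
+    Format-Table DeviceId, FriendlyName, SerialNumber, Usage, HealthStatus
+```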
+
+### Step 4: Manually Add Disks Only When Needed
+
+Manual add is appropriate when automatic pooling does not claim eligible disks, the target pool is known, and the disks show `CanPool=True`.
+
+First, inspect the current pool and eligible disks:
+
+```powershell
+# Inspect the target pool and the eligible disks
+$pool = Get-StoragePool -IsPrimordial $false
+$eligibleDisks = Get-PhysicalDisk -CanPool $true
+
+$pool | Format-Table FriendlyName, HealthStatus, OperationalStatus
+$eligibleDisks | Format-Table DeviceId, FriendlyName, SerialNumber, MediaType, Size, FirmwareVersion
+```
+
+> [!IMPORTANT]
+> A healthy Storage Spaces Direct cluster has exactly one non-primordial pool. The snippet below enforces that and requires the operator to enumerate the intended new disks by serial number, so `Add-PhysicalDisk` cannot accidentally claim unintended `CanPool=True` disks.
+
+```powershell
+# Defensive: require exactly one non-primordial pool. Abort otherwise.
+$pool = Get-StoragePool -IsPrimordial $false
+if (@($pool).Count -ne 1) {
+ throw "Expected exactly one non-primordial pool. Found $(@($pool).Count). " +
+ "Select the target pool explicitly by FriendlyName before continuing."
+}
+
+# Operator MUST enumerate the intended new disks by serial number.
+# Do NOT pipe Get-PhysicalDisk -CanPool $true directly into Add-PhysicalDisk.
+$intendedSerials = @(
+ '',
+ ''
+)
+
+# Resolve serials to physical disk objects and confirm the count matches the intent.
+$disksToAdd = Get-PhysicalDisk -CanPool $true |
+ Where-Object SerialNumber -in $intendedSerials
+if ($disksToAdd.Count -ne $intendedSerials.Count) {
+ throw "Disk count mismatch: $($disksToAdd.Count) eligible disks matched " +
+ "the $($intendedSerials.Count) intended serial numbers. Resolve before continuing."
+}
+
+# Add only the explicitly identified disks to the target pool.
+Add-PhysicalDisk -StoragePoolFriendlyName $pool.FriendlyName -PhysicalDisks $disksToAdd
+```
+
+## Monitor Redistribution and Storage Jobs
+
+After disks are added, Storage Spaces Direct may run background jobs to optimize and redistribute data.
+
+```powershell
+# Track active storage jobs (rebuild, regeneration, optimize, rebalance)
+Get-StorageJob
+```
+
+While storage jobs are active, avoid:
+
+- Rebooting nodes.
+- Applying updates.
+- Putting nodes into maintenance mode.
+- Adding more disks.
+- Expanding volumes.
+- Cancelling storage jobs.
+
+> [!NOTE]
+> Storage jobs can run for hours or days depending on pool size, media type, and workload.
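+
+For long-running jobs, a minimal polling sketch (the interval is arbitrary) avoids manual rechecking:
+
+```powershell
+# Poll every 5 minutes until no storage job remains incomplete
+while (Get-StorageJob | Where-Object JobState -ne 'Completed') {
+    Get-StorageJob | Format-Table Name, JobState, PercentComplete, BytesProcessed, BytesTotal
+    Start-Sleep -Seconds 300
+}
+```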
+
+## Confirm Added Capacity
+
+```powershell
+# Raw pool size should reflect the added disks
+Get-StoragePool -IsPrimordial $false |
+ Format-List FriendlyName, Size, AllocatedSize, HealthStatus, OperationalStatus
+```
+
+Usable capacity may not appear immediately if the system is restoring rebuild reserve or redistributing data.
+
+## Expand Volumes
+
+> [!IMPORTANT]
+> Hard gate: do not expand volumes while storage jobs are active.
+
+```powershell
+# Storage jobs MUST be empty before expansion
+Get-StorageJob
+```
+
+Expected result: no active jobs.
+
+```powershell
+# Decide whether expansion is required
+Get-VirtualDisk | Format-Table FriendlyName, ProvisioningType, Size, FootprintOnPool
+Get-Volume | Sort-Object SizeRemaining | Format-Table FileSystemLabel, HealthStatus, Size, SizeRemaining
+```
+
+For thin-provisioned volumes, manual expansion may not be needed immediately. If fixed volumes need expansion, use Windows Admin Center or the standard PowerShell resize flow.
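+
+For the fixed-volume case, the standard resize flow grows the virtual disk first and then the partition. A hedged sketch (the volume name and target size are example values; adapt to the actual volume layout):
+
+```powershell
+# Example values; replace with the real volume name and target size
+$volumeName = 'Volume01'
+$newSize    = 2TB
+
+# Grow the virtual disk first
+Get-VirtualDisk -FriendlyName $volumeName | Resize-VirtualDisk -Size $newSize
+
+# Then extend the partition to the new maximum supported size
+$partition = Get-VirtualDisk -FriendlyName $volumeName |
+    Get-Disk | Get-Partition | Where-Object Type -eq 'Basic'
+$maxSize = ($partition | Get-PartitionSupportedSize).SizeMax
+$partition | Resize-Partition -Size $maxSize
+```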
+
+> [!WARNING]
+> Do not consume all pool capacity. Preserve operational free space and rebuild reserve.
+
+## Verification
+
+```powershell
+# Final cluster + storage health snapshot
+Get-HealthFault
+Get-ClusterNode | Sort-Object Name | Format-Table Name, State
+Get-StorageJob
+
+# Pool, virtual disk, and volume health
+Get-StoragePool -IsPrimordial $false |
+ Format-Table FriendlyName, HealthStatus, OperationalStatus, Size, AllocatedSize
+Get-VirtualDisk |
+ Format-Table FriendlyName, HealthStatus, OperationalStatus
+Get-Volume |
+ Format-Table FileSystemLabel, HealthStatus, Size, SizeRemaining
+
+# Final physical disk inventory; compare against the pre-check baseline
+Get-PhysicalDisk |
+ Sort-Object DeviceId |
+ Format-Table DeviceId, FriendlyName, SerialNumber, MediaType, Size, FirmwareVersion, HealthStatus, Usage, CanPool, CannotPoolReason
+```
+
+Expected result:
+
+- No health faults.
+- All nodes are up.
+- New disks are in the intended pool and healthy.
+- Pool and virtual disks are healthy.
+- No unexpected storage jobs remain.
+
+## Troubleshooting
+
+### Disks show `CanPool=False`
+
+**Symptoms:** New disks are visible to Windows but `Get-PhysicalDisk` reports `CanPool=False` and pool capacity did not increase.
+**Solution:** Follow the companion troubleshooting guide: [Troubleshoot - Physical disks not claimed after insertion (`CanPool=False`)](./Troubleshoot-Storage-PhysicalDiskCanPoolFalse.md).
+
+### Storage jobs run for an unexpectedly long time
+
+**Symptoms:** `Get-StorageJob` continues to show active rebuild, regeneration, or optimize jobs for a long period after the add.
+**Solution:** Long jobs are expected on large pools and HDD capacity tiers. Do not cancel storage jobs. Validate cluster health (`Get-HealthFault`, `Get-ClusterNode`) and let the jobs complete. Consult the storage diagnostics tool: [Troubleshooting Storage With Support Diagnostics Tool](./Troubleshooting-Storage-With-Support-Diagnostics-Tool.md).
+
+### Asymmetric disk distribution detected after insertion
+
+**Symptoms:** The per-node symmetry check shows different disk counts per node after the add.
+**Solution:** Stop further changes. Insert the missing disks on the asymmetric nodes to restore symmetry before any further pool changes or volume expansion.
+
+## References
+
+- [Adding servers or drives to Storage Spaces Direct](https://learn.microsoft.com/windows-server/storage/storage-spaces/add-nodes#adding-drives)
+- [Troubleshoot Storage Spaces and Storage Spaces Direct health and operational states](https://learn.microsoft.com/windows-server/storage/storage-spaces/storage-spaces-states)
+- [Drive symmetry considerations](https://learn.microsoft.com/azure/azure-local/concepts/drive-symmetry-considerations)
+- [Troubleshooting Storage With Support Diagnostics Tool](./Troubleshooting-Storage-With-Support-Diagnostics-Tool.md)
+
+---
diff --git a/TSG/Storage/README.md b/TSG/Storage/README.md
index fa5171b..b2298a9 100644
--- a/TSG/Storage/README.md
+++ b/TSG/Storage/README.md
@@ -1,3 +1,5 @@
# Storage
* [Troubleshooting Storage With Support Diagnostics Tool](./Troubleshooting-Storage-With-Support-Diagnostics-Tool.md)
+* [How To: Add physical disks to an existing Azure Local cluster](./HowTo-Storage-AddPhysicalDisksToS2DPool.md)
+* [Troubleshoot: Physical disks not claimed after insertion (`CanPool=False`)](./Troubleshoot-Storage-PhysicalDiskCanPoolFalse.md)
diff --git a/TSG/Storage/Troubleshoot-Storage-PhysicalDiskCanPoolFalse.md b/TSG/Storage/Troubleshoot-Storage-PhysicalDiskCanPoolFalse.md
new file mode 100644
index 0000000..eaa66ee
--- /dev/null
+++ b/TSG/Storage/Troubleshoot-Storage-PhysicalDiskCanPoolFalse.md
@@ -0,0 +1,319 @@
+# Troubleshoot physical disks not claimed after insertion (`CanPool=False`)
+
+| | |
+| --- | --- |
+| **Component** | Storage |
+| **Severity** | Medium |
+| **Applicable Scenarios** | Day 2 Operations: Capacity expansion / Add disk |
+| **Affected Versions** | All Azure Local releases (Storage Spaces Direct) |
+
+## Overview
+
+This guide helps troubleshoot new physical disks that are visible to Windows on an Azure Local cluster but are not added to the Storage Spaces Direct (S2D) pool.
+
+`CanPool=False` does not always mean something is broken. It can mean the disk is already in the pool, still being verified, blocked by hardware or firmware support checks, offline, not healthy, or carrying old metadata.
+
+## Symptoms
+
+**Observable behaviors:**
+
+- New disks were inserted into one or more cluster nodes.
+- The disks appear in PowerShell.
+- Storage pool capacity did not increase.
+- `Get-PhysicalDisk` shows one or more disks with `CanPool=False`.
+
+**Common error indicator:**
+
+```
+CanPool : False
+CannotPoolReason :
+```
+
+## Root Cause
+
+S2D will not claim a physical disk into its pool unless every gate passes: the disk must be healthy, online, supported by the solution vendor (model + firmware), have completed Health Service verification, and not already belong to another pool. The `CannotPoolReason` field on `Get-PhysicalDisk` reports which gate is failing.
+
+### Common `CannotPoolReason` Values
+
+| CannotPoolReason | Meaning | Action |
+|---|---|---|
+| `In a Pool` | The disk was already claimed by a storage pool | Confirm pool membership; no fix needed |
+| `Verification in progress` | Health Service is checking whether the disk and firmware are approved | Wait and recheck |
+| `Verification failed` | Health Service could not complete supportability verification | Check cluster health and vendor support data |
+| `Hardware not compliant` | The disk model is not approved by the solution vendor | Contact the hardware vendor |
+| `Firmware not compliant` | The disk firmware is not approved by the solution vendor | Contact the hardware vendor |
+| `Offline` | The disk is offline | Bring only the intended disk online |
+| `Insufficient Capacity` | The disk is too small | Replace with a supported disk |
+| `Removable media not supported` | The disk is removable or presented as removable | Replace with supported internal storage |
+| Stale metadata suspected | The disk has previous data or pool metadata | Reset only after confirming the disk is safe to wipe |
+
+## Resolution
+
+### Prerequisites
+
+- An elevated PowerShell session on a cluster node.
+- Confirmation that the cluster is healthy aside from this issue (no unrelated active rebuild, no node down).
+- Vendor support matrix for the disk model and firmware version on hand.
+
+### Steps
+
+#### Step 1: Check the Disk State
+
+Capture the full picture for every physical disk and identify which disks are blocked and why.
+
+```powershell
+# Get the full disk picture; CannotPoolReason tells you which gate failed
+Get-PhysicalDisk |
+ Sort-Object DeviceId |
+ Format-Table DeviceId, FriendlyName, SerialNumber, MediaType, BusType, Size, FirmwareVersion, HealthStatus, Usage, CanPool, CannotPoolReason
+```
+
+The most important field is `CannotPoolReason`. Use the value to pick the matching sub-step below.
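+
+When many disks are involved, a quick way to see which gate dominates is to group on that field:
+
+```powershell
+# Count disks per CannotPoolReason; an empty Name means the disk is poolable
+Get-PhysicalDisk | Group-Object CannotPoolReason |
+    Sort-Object Count -Descending |
+    Format-Table Count, Name
+```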
+
+#### Step 2a: Resolve `In a Pool`
+
+This is usually expected after automatic pooling.
+
+```powershell
+# Replace with the new disk's serial number
+$serial = ''
+
+# Confirm the disk is in the intended (non-primordial) pool and healthy
+Get-StoragePool -IsPrimordial $false |
+ Get-PhysicalDisk |
+ Where-Object SerialNumber -eq $serial |
+ Format-Table DeviceId, FriendlyName, SerialNumber, Usage, HealthStatus, OperationalStatus
+```
+
+If the disk is in the intended pool and healthy, no additional action is needed.
+
+#### Step 2b: Resolve `Verification in progress`
+
+Wait several minutes and recheck:
+
+```powershell
+# Recheck verification progress
+Get-PhysicalDisk |
+ Format-Table DeviceId, FriendlyName, SerialNumber, CanPool, CannotPoolReason, HealthStatus
+```
+
+> [!WARNING]
+> Do not reset or manually add disks while verification is still in progress.
+
+#### Step 2c: Resolve `Verification failed`
+
+Check the cluster and storage state:
+
+```powershell
+# Cluster + storage health snapshot
+Get-HealthFault
+Get-ClusterNode | Format-Table Name, State
+Get-StorageJob
+Get-StoragePool -IsPrimordial $false | Format-Table FriendlyName, HealthStatus, OperationalStatus
+Get-VirtualDisk | Format-Table FriendlyName, HealthStatus, OperationalStatus
+```
+
+If available, run the Azure Local Support Diagnostic Tool storage checks:
+
+```powershell
+# Targeted disk and storage health checks via the Support Diagnostic Tool
+Start-AzsSupportStorageDiagnostic -Include 'DiskHealth','StorageHealth'
+```
+
+For details on these checks see [Troubleshooting Storage With Support Diagnostics Tool](./Troubleshooting-Storage-With-Support-Diagnostics-Tool.md).
+
+If the disk model or firmware is new to the system, validate supportability with the hardware vendor.
+
+#### Step 2d: Resolve `Hardware not compliant`
+
+The disk model is not approved for this solution.
+
+```powershell
+# Capture model + firmware so the vendor can confirm support
+Get-PhysicalDisk |
+ Format-Table DeviceId, FriendlyName, SerialNumber, FirmwareVersion, MediaType, BusType, CanPool, CannotPoolReason
+```
+
+> [!CAUTION]
+> Do not bypass hardware validation. Contact the hardware vendor for a supported disk model or an updated solution support package.
+
+#### Step 2e: Resolve `Firmware not compliant`
+
+The disk firmware is not approved for this solution.
+
+```powershell
+# Compare firmware on the new vs existing disks of the same model
+Get-PhysicalDisk |
+ Sort-Object FriendlyName, FirmwareVersion |
+ Format-Table DeviceId, FriendlyName, SerialNumber, FirmwareVersion, HealthStatus, CanPool, CannotPoolReason
+```
+
+Contact the hardware vendor for firmware alignment or updated support guidance.
+
+#### Step 2f: Resolve `Offline` or Read-Only Disk State
+
+```powershell
+# Identify the exact disk first
+Get-Disk | Sort-Object Number |
+ Format-Table Number, FriendlyName, SerialNumber, OperationalStatus, IsOffline, IsReadOnly, PartitionStyle
+```
+
+After the intended disk is confirmed:
+
+```powershell
+# Replace 0 with the Number value from Get-Disk above
+$diskNumber = 0
+Set-Disk -Number $diskNumber -IsOffline $false
+Set-Disk -Number $diskNumber -IsReadOnly $false
+```
+
+Recheck `Get-PhysicalDisk` afterward.
+
+#### Step 2g: Resolve Stale Metadata or Previous Pool Membership
+
+> [!WARNING]
+> `Reset-PhysicalDisk` is destructive. Do not run it on a disk that belongs to an active pool or contains data that must be preserved. Use this path **only** for disks that are intended to be wiped.
+
+Before reset, confirm the disk identity and that it is not in any pool:
+
+```powershell
+# Inspect the candidate disk
+Get-PhysicalDisk -UniqueId '' | Format-List *
+
+# Confirm it is NOT a member of any active pool (this should return nothing)
+Get-StoragePool -IsPrimordial $false |
+ Get-PhysicalDisk |
+ Where-Object UniqueId -eq '' |
+ Format-List *
+```
+
+Only if the disk is confirmed to be unused stale media and the data can be destroyed:
+
+```powershell
+# Destructive: clears storage pool metadata from the disk
+Reset-PhysicalDisk -UniqueId ''
+```
+
+Wait several minutes and recheck:
+
+```powershell
+# Verify the disk is now eligible
+Get-PhysicalDisk -UniqueId '' |
+ Format-Table DeviceId, FriendlyName, SerialNumber, CanPool, CannotPoolReason, HealthStatus, Usage
+```
+
+#### Step 3: Manual Add When Disks Are Eligible
+
+If the disks now show `CanPool=True` but are not automatically claimed, first inspect the current pool and eligible disks:
+
+```powershell
+# Inspect the target pool and eligible disks
+$pool = Get-StoragePool -IsPrimordial $false
+$eligibleDisks = Get-PhysicalDisk -CanPool $true
+
+$pool | Format-Table FriendlyName, HealthStatus, OperationalStatus
+$eligibleDisks | Format-Table DeviceId, FriendlyName, SerialNumber, MediaType, Size, FirmwareVersion
+```
+
+> [!IMPORTANT]
+> A healthy Storage Spaces Direct cluster has exactly one non-primordial pool. The snippet below enforces that and requires the operator to enumerate the intended new disks by serial number, so `Add-PhysicalDisk` cannot accidentally claim unintended `CanPool=True` disks.
+
+```powershell
+# Defensive: require exactly one non-primordial pool. Abort otherwise.
+$pool = Get-StoragePool -IsPrimordial $false
+if (@($pool).Count -ne 1) {
+ throw "Expected exactly one non-primordial pool. Found $(@($pool).Count). " +
+ "Select the target pool explicitly by FriendlyName before continuing."
+}
+
+# Operator MUST enumerate the intended new disks by serial number.
+# Do NOT pipe Get-PhysicalDisk -CanPool $true directly into Add-PhysicalDisk.
+$intendedSerials = @(
+ '',
+ ''
+)
+
+# Resolve serials to physical disk objects and confirm the count matches the intent.
+$disksToAdd = Get-PhysicalDisk -CanPool $true |
+ Where-Object SerialNumber -in $intendedSerials
+if ($disksToAdd.Count -ne $intendedSerials.Count) {
+ throw "Disk count mismatch: $($disksToAdd.Count) eligible disks matched " +
+ "the $($intendedSerials.Count) intended serial numbers. Resolve before continuing."
+}
+
+# Add only the explicitly identified disks to the target pool.
+Add-PhysicalDisk -StoragePoolFriendlyName $pool.FriendlyName -PhysicalDisks $disksToAdd
+```
+
+#### Step 4: Verify Resolution
+
+```powershell
+# Confirm the new disks are in the pool, healthy, with no surprise jobs or faults
+Get-PhysicalDisk | Sort-Object DeviceId |
+ Format-Table DeviceId, FriendlyName, SerialNumber, Usage, HealthStatus, CanPool, CannotPoolReason
+Get-StoragePool -IsPrimordial $false |
+ Format-Table FriendlyName, HealthStatus, OperationalStatus, Size, AllocatedSize
+Get-StorageJob
+Get-HealthFault
+```
+
+Expected result:
+
+- New disks are in the intended pool.
+- New disks are healthy.
+- No unintended `CanPool=True` disks remain.
+- No new storage faults are active.
+- Any expected storage jobs are progressing.
+
+## Prevention
+
+- Always validate disk model and firmware against the OEM solution support matrix **before** insertion.
+- Add disks symmetrically (same count and type per node) to avoid stranded capacity.
+- Run the [How to add physical disks to an existing Azure Local cluster](./HowTo-Storage-AddPhysicalDisksToS2DPool.md) pre-checks before any insertion.
+- Avoid reusing disks from prior deployments without confirming they are wiped of stale pool metadata.
+
+## Data to Collect Before Opening a Support Case
+
+```powershell
+# Cluster + storage state snapshot
+Get-HealthFault
+Get-ClusterNode | Format-Table Name, State
+Get-StorageJob
+Get-StoragePool -IsPrimordial $false | Format-List *
+Get-VirtualDisk | Format-List *
+Get-PhysicalDisk | Format-List *
+
+# Last 60 minutes of cluster log to C:\Temp
+Get-ClusterLog -Destination C:\Temp -TimeSpan 60
+```
+
+Also collect:
+
+- Disk serial numbers and unique IDs.
+- Node and slot mapping.
+- Disk model and firmware version.
+- Hardware vendor support matrix or written confirmation for the disk model and firmware.
+- Whether automatic pooling is expected or intentionally disabled.
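+
+For the last point, whether automatic pooling is enabled can be read from the clustered storage subsystem (a hedged check; property availability can vary by release):
+
+```powershell
+# True means S2D claims eligible disks automatically
+Get-StorageSubSystem -FriendlyName 'Clustered*' |
+    Format-Table FriendlyName, AutomaticClusteringEnabled
+```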
+
+## Related Issues
+
+- [How to add physical disks to an existing Azure Local cluster](./HowTo-Storage-AddPhysicalDisksToS2DPool.md)
+- [Troubleshooting Storage With Support Diagnostics Tool](./Troubleshooting-Storage-With-Support-Diagnostics-Tool.md)
+
+## References
+
+- [Adding servers or drives to Storage Spaces Direct](https://learn.microsoft.com/windows-server/storage/storage-spaces/add-nodes#adding-drives)
+- [Troubleshoot Storage Spaces and Storage Spaces Direct health and operational states](https://learn.microsoft.com/windows-server/storage/storage-spaces/storage-spaces-states)
+
+---