DrawingCanvas API: Replace imperative extension methods with stateful canvas-based drawing model #377
JimBobSquarePants wants to merge 244 commits into main from
Conversation
I agree that high-concurrency service throughput is worth understanding, but I don't think it should automatically define the default optimization target for ImageSharp.Drawing. I don't see this library primarily as a web-server drawing component in the same way some ImageSharp image-processing workloads are. The scenarios I have in mind are things like CAD-style rendering, charts and graphs, UI generation, tooling, and other programmatic rendering workloads where single-render performance and overall rendering capability matter more than maximizing throughput under heavily concurrent request load. That makes the current single-request results directly relevant.

I did explore whether the library could self-throttle under concurrent load without user tuning. The honest answer is that any in-library mechanism needs either a shared scheduler across requests or a runtime pool-pressure signal, neither of which exists in a form we can consume without significant upstream changes.
CPU rasterizer parallel performance
Scenarios that involve immediate, on-screen rendering are better addressed by the WebGPU renderer. That said, I just ran the same benchmarks with the V2 rasterizer, and this PR significantly improves every aspect perf-wise (🚀), regardless of whether parallelism is used, so I'm no longer pursuing this in the PR.
I don't know if self-throttling is the only possible answer here; I would recommend opening a tracking issue once this is merged. I'll move on to reviewing other aspects now.
antonfirsov
left a comment
Plenty of notes although the review is still incomplete. Most importantly, there seems to be a bug in the GPU renderer, see the comment on the lines demo.
| texture = flushContext.Api.DeviceCreateTexture(flushContext.Device, in textureDescriptor);
| if (texture is null)
| {
|     error = "Failed to create WebGPU composition texture.";
Nit: passing around these error strings is odd, it would be cleaner to emit these messages to some sort of IErrorLogger.
| toolbar.Controls.Add(this.CreateRunButton("1k", 1_000));
| toolbar.Controls.Add(this.CreateRunButton("10k", 10_000));
| toolbar.Controls.Add(this.CreateRunButton("100k", 100_000));
| toolbar.Controls.Add(this.CreateRunButton("200k", 200_000));
I actually wanted to make this 1M, but that makes the implementation hit buffer limits:
WebGPU (Failed: The staged-scene path tiles buffer requires 141001808 bytes, exceeding the current WebGPU binding limit of 134217728 bytes.)
Definitely not a show stopper, but some users may be limited by this, so it's worth noting down.
It might be possible to work around this. There are hard limits to what you can send though.
A workaround is good, but in the long term I would view it as an optimization problem. The buffers are likely heavier than needed.
| canvas.Flush();
| }
|
| stopwatch.Stop();
Yeah, at 200K I'm seeing about 4x. That's annoying. I'll see what I can do.
antonfirsov
left a comment
(Mostly) high-level notes on IDrawingCanvas & DrawingCanvas<TPixel>.
| // Defensive guard: built-in backends should provide either direct readback (CPU/backed surface)
| // or shadow fallback. If this fails, it indicates a backend implementation bug or an unsupported custom backend.
| if (!this.TryCreateProcessSourceImage(sourceRect, out Image<TPixel>? sourceImage))
Apply methods are very expensive on GPU backends in comparison to GPU-accelerated drawing operations. Since the IImageProcessingContext interface doesn't make it obvious that an Image<T> backed by CPU memory is involved, this can be even more surprising. Is there any reasonable scenario where a user would be willing to pay the high price? If not, I would say it makes sense to always throw a NotSupportedException on GPU.
The cost is more bounded than it looks. Apply on GPU scissors to the path's AABB, not the full frame:

- Flush queued GPU work so the read sees the correct backdrop.
- `backend.TryReadRegion(...)` reads only the path's bounding rectangle.
- `Image<TPixel>.Mutate(operation)` runs the processor on that sub-image.
- The result is wrapped as an `ImageBrush` and composited via the normal GPU fill path.

A 100×100 blur on a 1920×1080 canvas is 10k pixels of readback, not 2M. The real cost is the processor itself, which is what the user asked for.
This is the bridge to the entire ImageSharp.Processing surface: blur, convolutions, color correction, quantization, dither, pixelate, resize, tone curves, edge detection, user-authored IImageProcessors. None exist as GPU-native primitives and most never would/could.
If we throw on GPU, the user wanting a blurred backdrop has to read the region back manually (usually a larger rect than the AABB), run the processor, build an ImageBrush, draw it. Same four steps, moved into every caller, with worse scissoring.
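To make the flow concrete, here is a rough sketch of the four steps (only the shape of the flow is intended to be accurate; `ApplyOnGpu` and the exact signatures are illustrative, not the actual implementation):

```csharp
// Sketch of the scissored GPU Apply path described above (hypothetical helper).
private void ApplyOnGpu(Action<IImageProcessingContext> operation, Rectangle pathAabb)
{
    // 1. Flush queued GPU work so the readback sees the correct backdrop.
    this.Flush();

    // 2. Read back only the path's axis-aligned bounding rectangle, not the full frame.
    if (!this.backend.TryReadRegion(pathAabb, out Image<TPixel>? sourceImage))
    {
        throw new InvalidOperationException("Backend does not support region readback.");
    }

    // 3. Run the (CPU) processor on the small sub-image only.
    sourceImage.Mutate(operation);

    // 4. Composite the result back via the normal GPU fill path.
    this.Fill(new ImageBrush(sourceImage), new RectangularPolygon(pathAabb));
}
```

The point of the sketch is step 2: the readback cost scales with the path's AABB, not the canvas size.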
> The cost is more bounded than it looks

Have you benchmarked drawing a scene with `Apply` involved vs. the same scene without `Apply`?

> A 100×100 blur on a 1920×1080 canvas is 10k pixels readback, not 2M.

I expect the readback to imply a significant synchronization cost regardless of the amount of data. Note that it is implemented via sync-over-async to keep the API synchronous.

> the processor itself, which is what the user asked for.

For a newcomer who wants to use the WebGPU pipeline and didn't use ImageSharp before, it might not be obvious from the `IImageProcessingContext` API shape that processors are running on the CPU.
Even as a relative newcomer to GPU programming I would expect the synchronization cost. Something must be copied.
I do not think an "Apply vs no Apply" benchmark would answer this cleanly. It would measure the selected processor, the readback, and the final composition together, not the readback cost alone, unless we use an artificial no-op processor, and then the result is not especially representative of real usage.
The important implementation choice here is that the readback is scoped to the affected bounds rather than the full canvas.
I can make the synchronous readback behavior explicit in the supplementary docs, but I do not think the benchmark suggested would prove much beyond the fact that this operation crosses the GPU/CPU boundary.
I'm suggesting the benchmark mostly because you seem to be skeptical that the cost of the readback would be significant. I think it's possible to design it to be informative, e.g. via parametrization.
I'm mostly viewing this API from long-term perspective: even if Apply is relatively less painful in the current pipeline, in a future, highly optimized version of ImageSharp.Drawing.WebGPU the readback+CPU processor cost would be very noticeable. Users who want to render complex scenes with high FPS should avoid Apply.
> can make the synchronous readback behavior explicit in the supplementary docs

Yes please! This would address my concern. I would even add a note in the API docs, for example:
<remarks>
On a GPU backend this is implemented using readback and an image processor running on the CPU.
</remarks>
A couple of random observations and follow-ups to previous discussions. The GH review UI is weird and shows comments added during my review two times: in the original discussions, and here in the form of "new" comments.
Will move on to WebGPUHostedWindow<T> tomorrow.
| /// A drawing canvas over a frame target.
| /// </summary>
| /// <typeparam name="TPixel">The pixel format.</typeparam>
| public sealed partial class DrawingCanvas<TPixel> : IDrawingCanvas
Is there a good reason to publicly expose the DrawingCanvas<T> type instead of just having factory methods returning IDrawingCanvas instances? (Those are already present in DrawingCanvasExtensions.)
Yes. DrawingCanvas<TPixel> is the concrete typed canvas, and I want that to remain part of the public surface.
IDrawingCanvas is useful as the common operation contract, but it is not a replacement for the concrete type. The concrete type carries the pixel type, exposes typed construction over Buffer2DRegion<TPixel> / ICanvasFrame<TPixel>, and returns typed child canvases from CreateRegion. That matters for callers building canvases over non-Image<TPixel> targets, including backend integrations.
The factory methods are convenience entry points for Image<TPixel> / ImageFrame<TPixel>. They should return the concrete type for the same reason ImageSharp APIs generally return concrete typed objects when the pixel type is known. Callers who want abstraction can still store it as IDrawingCanvas, but forcing the factory methods to return IDrawingCanvas would throw away useful type information without simplifying the implementation.
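To make the typed-surface argument concrete, usage might look roughly like this (a sketch built from the names in this discussion, not verbatim signatures):

```csharp
// Sketch: the concrete canvas keeps the pixel type visible to callers.
Image<Rgba32> image = new(1920, 1080);

DrawingCanvas<Rgba32> canvas = image.CreateCanvas();  // typed factory entry point
DrawingCanvas<Rgba32> child =
    canvas.CreateRegion(new Rectangle(0, 0, 64, 64)); // typed child canvas

IDrawingCanvas abstracted = canvas; // callers who prefer the abstraction can still use it
```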
I looked into this even deeper, and now I strongly recommend giving this another thought. Seeing it with fresh eyes, it looks to me like the pixel type can be simplified away from most APIs, leading to a smaller API surface and the elimination of runtime costs from generic instantiation. The pixel type information is only useful if there are use-cases that won't work with a pixel-agnostic canvas. I'm not convinced this is the case.
> They should return the concrete type for the same reason ImageSharp APIs generally return concrete typed objects when the pixel type is known

`Image<TPixel>` has pixel-specific APIs, most importantly `ProcessPixelRows`, where the consumer needs to know the pixel type. There are no such use cases for the canvas where the pixel type is relevant for the consumer.

> exposes typed construction over `Buffer2DRegion<TPixel>` / `ICanvasFrame<TPixel>`, and returns typed child canvases from `CreateRegion`

The canvases returned by those methods could just as well be pixel-agnostic.

> That matters for callers building canvases over non-`Image<TPixel>` targets, including backend integrations.

From what I see, all dependent backend types (`WebGPUDeviceContext`, `WebGPURenderTarget`, `WebGPUWindowFrame`, `WebGPUHostedWindow`, ...) can be made pixel-agnostic, with pixel-specific factory methods here and there to do the `TPixel` -> `CompositePixelRegistration` mapping.

Do you have a specific example that would prove me wrong?
| /// <summary>
| /// Convenience shape helpers that build paths and forward to the core <see cref="IDrawingCanvas"/> fill and draw primitives.
| /// </summary>
| public static class DrawingCanvasShapeExtensions
What decides whether a method should go here or on IDrawingCanvas? As pointed out in another comment, DrawingCanvas<T> : IDrawingCanvas has a couple of delegating methods that don't use its internal state directly.
I moved these to extensions to address the earlier symmetry feedback, and this is the boundary I want here.
IDrawingCanvas contains the core canvas contract: state/layers, region creation, core fill/draw/apply/text/image operations, and flushing. DrawingCanvasShapeExtensions contains shape convenience overloads that only build paths or iterate path collections before forwarding to the core Fill/Draw methods.
These methods do not define new canvas behavior, so they should stay as extensions rather than expanding the interface.
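As an illustration of that boundary, a helper in `DrawingCanvasShapeExtensions` is expected to look roughly like this (a sketch using the `Polygon`/`LinearLineSegment` construction from the migration guide, not the exact source):

```csharp
// Sketch: a shape convenience overload. It only builds a path and forwards to
// the core Fill primitive; no canvas state or new behavior is introduced here.
public static class DrawingCanvasShapeExtensionsSketch
{
    public static void FillPolygon(this IDrawingCanvas canvas, Brush brush, params PointF[] points)
        => canvas.Fill(brush, new Polygon(new LinearLineSegment(points)));
}
```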
> `IDrawingCanvas` contains the core canvas contract

It also contains many methods that are implemented trivially via delegation: `SaveLayer(GraphicsOptions layerOptions)`, `SaveLayer(GraphicsOptions layerOptions, Rectangle bounds)`, `Fill(Brush brush)`, potentially `Clear(Brush brush)`. Do you consider that core functionality? Wouldn't it make sense to move those to extension methods too?
antonfirsov
left a comment
Comments on high-level API shape & behaviors. Follow-up on the discussion on pixel-specific APIs.
| public sealed class NativeSurface
| {
|     private readonly ConcurrentDictionary<Type, object> capabilities = new();
From what I see, this will only contain a single WebGPUSurfaceCapability in practice. Is it realistic that a backend would need to store multiple "capabilities" given - just like WebGPUSurfaceCapability - it can aggregate all necessary information in a single type?
If not, the pattern could be simplified:
public interface INativeSurface { }
class WebGPUSurface : INativeSurface
{
private readonly WebGPUDeviceHandle deviceHandle;
private readonly WebGPUQueueHandle queueHandle;
private readonly WebGPUTextureHandle targetTextureHandle;
private readonly WebGPUTextureViewHandle targetTextureViewHandle;
}

| /// <summary>
| /// Gets pixel format information for this destination surface.
| /// </summary>
| public PixelTypeInfo PixelType { get; }
This is unused. Is it really needed?
| /// <remarks>
| /// The backing WebGPU device, queue, texture, and texture view must remain valid while canvases target this surface.
| /// </remarks>
| public sealed class WebGPUSurfaceCapability
Does this need to be public? Do we expect a user to interact directly with this type in practice?
| public bool TryGetCpuRegion(out Buffer2DRegion<TPixel> region);
|
| /// <summary>
| /// Attempts to get an opaque native destination surface.
| /// </summary>
| /// <param name="surface">The native surface when available.</param>
| /// <returns><see langword="true"/> when a native surface is available.</returns>
| public bool TryGetNativeSurface([NotNullWhen(true)] out NativeSurface? surface);
There is an exclusive or relationship between TryGetCpuRegion and TryGetNativeSurface in implementing types. ICanvasFrame<TPixel> seems to be the only abstraction that makes the presence of TPixel viral through all APIs.
If instead DrawingCanvas<TPixel> would contain either a Buffer2DRegion<TPixel> or a NativeSurface, it seems to me that consumer types could be made pixel-agnostic.
The functionality provided by CanvasRegionFrame<TPixel> can be implemented by other means.
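A rough sketch of the pixel-agnostic shape being suggested here (every name below is illustrative, not an existing type):

```csharp
// Sketch: the canvas stores exactly one of the two targets, mirroring the XOR
// relationship between TryGetCpuRegion and TryGetNativeSurface. The pixel type
// is confined to the generic factory method and an internal adapter.
public sealed class PixelAgnosticCanvasSketch
{
    private readonly ICpuTargetAdapter? cpuTarget;   // wraps a Buffer2DRegion<TPixel> internally
    private readonly NativeSurface? nativeSurface;

    private PixelAgnosticCanvasSketch(ICpuTargetAdapter? cpu, NativeSurface? native)
        => (this.cpuTarget, this.nativeSurface) = (cpu, native);

    // Pixel-specific knowledge lives only here.
    public static PixelAgnosticCanvasSketch Create<TPixel>(Buffer2DRegion<TPixel> region)
        where TPixel : unmanaged, IPixel<TPixel>
        => new(new CpuTargetAdapter<TPixel>(region), null);

    public static PixelAgnosticCanvasSketch Create(NativeSurface surface)
        => new(null, surface);
}
```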
| this.DiagnosticLastSceneFailure = exceedsBindingLimit
|     ? error ?? "The staged WebGPU scene exceeded the current binding limits."
|     : error ?? "Failed to create the staged WebGPU scene.";
| this.FlushCompositionsFallback(configuration, target, compositionScene, compositionBounds: null);
The fallback mechanism should be optional (IMO rather opt-in than opt-out). For users who want rendering to be fast, the WebGPU failure is an error scenario.
When fallbacks happen, the reason should be observable by the user, which points towards having a public logging mechanism instead of a test-only DiagnosticLastSceneFailure.
| _ = sender;
| _ = e;
Nit: if possible, I would rather suppress analyzers reporting errors/warnings here than have these NOP assignments.

Prerequisites
Breaking Changes: DrawingCanvas API
Fix #106
Fix #244
Fix #344
Fix #367
This is a major breaking change. The library's public API has been completely redesigned around a canvas-based drawing model, replacing the previous collection of imperative extension methods.
What changed
The old API surface — dozens of `IImageProcessingContext` extension methods like `DrawLine()`, `DrawPolygon()`, `FillPolygon()`, `DrawBeziers()`, `DrawImage()`, `DrawText()`, etc. — has been removed entirely. These methods were individually simple but suffered from several architectural limitations.

The new model: DrawingCanvas

All drawing now goes through `IDrawingCanvas` / `DrawingCanvas<TPixel>`, a stateful canvas that queues draw commands and flushes them as a batch. It can be used in two ways:

- Via `Image.Mutate()` (most common).
- Standalone usage (without `Image.Mutate`): a `DrawingCanvas<TPixel>` can be created directly from an image or frame using the `CreateCanvas(...)` extensions.

Canvas state management
The canvas supports a save/restore stack (similar to HTML Canvas or SkCanvas):
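A minimal usage sketch of the stack (based on the API names in this PR; the options values and shapes are arbitrary):

```csharp
image.Mutate(ctx => ctx.ProcessWithCanvas(canvas =>
{
    // Drawn with the default state.
    canvas.Fill(Brushes.Solid(Color.White));

    // SaveLayer opens an offscreen layer; everything drawn inside it is
    // composited back as a unit when Restore is called.
    canvas.SaveLayer(new GraphicsOptions { BlendPercentage = 0.5f });
    canvas.Fill(Brushes.Solid(Color.Red), new RectangularPolygon(10, 10, 100, 100));
    canvas.Restore();
}));
```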
State includes `DrawingOptions` (graphics options, shape options, transform) and clip paths. `SaveLayer` creates an offscreen layer that composites back on `Restore`.

IDrawingBackend — bring your own renderer

The library's rasterization and composition pipeline is abstracted behind `IDrawingBackend`. This interface has the following methods:

- `FlushCompositions<TPixel>`
- `TryReadRegion<TPixel>` (used by `Process()` and `DrawImage()`)

The library ships with `DefaultDrawingBackend` (CPU, tiled fixed-point rasterizer). An experimental WebGPU compute-shader backend (`ImageSharp.Drawing.WebGPU`) is also available, demonstrating how alternate backends plug in. Users can provide their own implementations — for example, GPU-accelerated backends, SVG emitters, or recording/replay layers.

Backends are registered on `Configuration`.

Migration guide
| Old API | New API |
| --- | --- |
| `ctx.Fill(color, path)` | `ctx.ProcessWithCanvas(c => c.Fill(Brushes.Solid(color), path))` |
| `ctx.Fill(brush, path)` | `ctx.ProcessWithCanvas(c => c.Fill(brush, path))` |
| `ctx.Draw(pen, path)` | `ctx.ProcessWithCanvas(c => c.Draw(pen, path))` |
| `ctx.DrawLine(pen, points)` | `ctx.ProcessWithCanvas(c => c.DrawLine(pen, points))` |
| `ctx.DrawPolygon(pen, points)` | `ctx.ProcessWithCanvas(c => c.Draw(pen, new Polygon(new LinearLineSegment(points))))` |
| `ctx.FillPolygon(brush, points)` | `ctx.ProcessWithCanvas(c => c.Fill(brush, new Polygon(new LinearLineSegment(points))))` |
| `ctx.DrawText(text, font, color, origin)` | `ctx.ProcessWithCanvas(c => c.DrawText(new RichTextOptions(font) { Origin = origin }, text, Brushes.Solid(color), null))` |
| `ctx.DrawImage(overlay, opacity)` | `ctx.ProcessWithCanvas(c => c.DrawImage(overlay, sourceRect, destRect))` |

Multiple operations can be combined in a single `ProcessWithCanvas` block — commands are batched and flushed together.
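Putting the migration mapping together, moving a multi-operation draw across might look like this (a sketch; `brush`, `pen`, `points`, `text`, `font`, `color`, and `origin` stand in for caller-provided values):

```csharp
// Before: chained imperative extensions, each applied independently.
// image.Mutate(ctx => ctx
//     .FillPolygon(brush, points)
//     .DrawLine(pen, points)
//     .DrawText(text, font, color, origin));

// After: one ProcessWithCanvas block; the commands are batched and flushed together.
image.Mutate(ctx => ctx.ProcessWithCanvas(canvas =>
{
    canvas.Fill(brush, new Polygon(new LinearLineSegment(points)));
    canvas.DrawLine(pen, points);
    canvas.DrawText(new RichTextOptions(font) { Origin = origin }, text, Brushes.Solid(color), null);
}));
```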
Other breaking changes in this PR

- `AntialiasSubpixelDepth` removed — the rasterizer now uses a fixed 256-step (8-bit) subpixel depth. The old `AntialiasSubpixelDepth` property (default: 16) controlled how many vertical subpixel steps the rasterizer used per pixel row. The new fixed-point scanline rasterizer integrates area/cover analytically per cell rather than sampling at discrete subpixel rows, so the "depth" is a property of the coordinate precision (24.8 fixed-point), not a tunable sample count. 256 steps gives ~0.4% coverage granularity — more than sufficient for all practical use cases. The old default of 16 (~6.25% granularity) could produce visible banding on gentle slopes.
- `GraphicsOptions.Antialias` — now controls `RasterizationMode` (antialiased vs aliased). When `false`, coverage is snapped to binary using `AntialiasThreshold`.
- `GraphicsOptions.AntialiasThreshold` — new property (0–1, default 0.5) controlling the coverage cutoff in aliased mode. Pixels with coverage at or above this value become fully opaque; pixels below are discarded.
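The aliased-mode cutoff described above amounts to a per-pixel snap, roughly like this (illustrative, not the library's internal code):

```csharp
// Illustrative: how AntialiasThreshold resolves coverage when Antialias is false.
// coverage and threshold are both in [0, 1]; the threshold defaults to 0.5.
static float ResolveCoverage(float coverage, bool antialias, float threshold = 0.5f)
{
    if (antialias)
    {
        return coverage; // antialiased mode keeps fractional coverage
    }

    // Aliased mode: at or above the threshold becomes fully opaque; below is discarded.
    return coverage >= threshold ? 1f : 0f;
}
```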
Benchmarks

All benchmarks run under the following environment.
DrawPolygonAll - Renders a 7200x4800px path of the state of Mississippi with a 2px stroke.
FillParis - Renders a 1096x1060px scene containing 50K fill paths.