Hi greggman, while reading this paragraph in the article (https://webgpufundamentals.org/webgpu/lessons/webgpu-fundamentals.html), I started wondering whether multiple render targets would help in my case. I'm doing 2D video rendering and sometimes end up with multiple canvases on the same display. I'd appreciate any experience or insights you can share on this topic.
I'm sorry I never responded. I'm not sure what you're asking for. Rendering to multiple canvases (covered here) is not the same as rendering to multiple targets.
Rendering to multiple targets is no different from rendering to 1 target. You just specify more than 1 target.
In the render pipeline descriptor, add the formats of the targets.
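For example, a minimal sketch of that part of the pipeline descriptor (the shader module, entry point names, and 'rgba8unorm' formats here are assumptions, not from the article):

const pipeline = device.createRenderPipeline({
  layout: 'auto',
  vertex: { module, entryPoint: 'vs' },
  fragment: {
    module,
    entryPoint: 'fs',
    // one entry per render target, in the same order as @location(0), @location(1), ...
    targets: [
      { format: 'rgba8unorm' },   // target 0
      { format: 'rgba8unorm' },   // target 1
    ],
  },
});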
You could call context.getCurrentTexture() on 2 different canvases and use those textures as the targets, as long as the canvases are the same size (see the sketch below). Using canvases as multiple targets is uncommon though; more common are rendering techniques like "deferred rendering".
This article explains the concept. It's not a WebGPU article, but the example linked above shows how to do the same thing in WebGPU.
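To illustrate the two-canvas case mentioned above, here is a rough sketch (the canvas selectors and the way the canvases are looked up are assumptions):

const format = navigator.gpu.getPreferredCanvasFormat();
const contexts = ['#canvasA', '#canvasB'].map(selector => {
  const context = document.querySelector(selector).getContext('webgpu');
  context.configure({ device, format });
  return context;
});
// each frame, use the current texture of each canvas as one of the targets
// note: both canvases need to be the same size
const colorAttachments = contexts.map(context => ({
  view: context.getCurrentTexture().createView(),
  clearValue: [0.3, 0.3, 0.3, 1],
  loadOp: 'clear',
  storeOp: 'store',
}));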
Assign the textures as colorAttachments in your render pass descriptor
const renderPassDescriptor = {
  colorAttachments: [
    {
      view: texture0.createView(),   // first target, @location(0) in the fragment shader
      clearValue: [0.3, 0.3, 0.3, 1],
      loadOp: 'clear',
      storeOp: 'store',
    },
    {
      view: texture1.createView(),   // second target, @location(1) in the fragment shader
      clearValue: [0.3, 0.3, 0.3, 1],
      loadOp: 'clear',
      storeOp: 'store',
    },
  ],
};
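texture0 and texture1 above just need to be textures usable as render attachments; they could also be the canvas textures from context.getCurrentTexture() as mentioned earlier. A minimal sketch of creating them (the size, format, and usage flags here are assumptions):

const size = [256, 256];   // assumed size; both targets must be the same size
const [texture0, texture1] = [0, 1].map(() => device.createTexture({
  size,
  format: 'rgba8unorm',    // must match the corresponding entry in the pipeline's targets
  usage: GPUTextureUsage.RENDER_ATTACHMENT | GPUTextureUsage.TEXTURE_BINDING,
}));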
Make your fragment shader output to each location where @location(n) is target index n.
struct MyFragmentShaderOutputs {
  @location(0) target0: vec4f,
  @location(1) target1: vec4f,
}

@fragment fn fs() -> MyFragmentShaderOutputs {
  var o: MyFragmentShaderOutputs;
  o.target0 = vec4f(1, 0, 0, 1);   // put red in target0
  o.target1 = vec4f(0, 0, 1, 1);   // put blue in target1
  return o;
}
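Putting it together, a sketch of encoding the pass (assuming the pipeline and renderPassDescriptor from above; the vertex count is just an example):

const encoder = device.createCommandEncoder();
const pass = encoder.beginRenderPass(renderPassDescriptor);
pass.setPipeline(pipeline);
pass.draw(3);   // a single draw writes to both targets
pass.end();
device.queue.submit([encoder.finish()]);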