Could you share some best practices for multiple render targets? #85

Open

sucong0826 opened this issue Jan 11, 2024 · 1 comment

@sucong0826

Hi greggman, while reading this paragraph in the article (https://webgpufundamentals.org/webgpu/lessons/webgpu-fundamentals.html), I started considering multiple render targets for my use case. I am working on 2D video rendering and have encountered situations where multiple canvases are created on the same display. I would greatly appreciate it if you could share your experience or any insights on this topic.

@greggman (Collaborator) commented Apr 12, 2024

I'm sorry I never responded. I'm not sure what you're asking for. Rendering to multiple canvases (covered here) is not the same as rendering to multiple targets.
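For the multiple-canvas case, here is a minimal sketch, assuming two same-size canvases (the names canvas0 and canvas1 are placeholders) plus a device and a single-target pipeline that already exist. You configure one context per canvas and record one render pass per canvas:

const contexts = [canvas0, canvas1].map(canvas => {
  const context = canvas.getContext('webgpu');
  context.configure({ device, format: navigator.gpu.getPreferredCanvasFormat() });
  return context;
});

const encoder = device.createCommandEncoder();
for (const context of contexts) {
  // one pass per canvas, each targeting that canvas's current texture
  const pass = encoder.beginRenderPass({
    colorAttachments: [{
      view: context.getCurrentTexture().createView(),
      clearValue: [0.3, 0.3, 0.3, 1],
      loadOp: 'clear',
      storeOp: 'store',
    }],
  });
  pass.setPipeline(pipeline);
  pass.draw(3);  // 3 vertices for a hardcoded triangle
  pass.end();
}
device.queue.submit([encoder.finish()]);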

Rendering to multiple targets is no different from rendering to one target; you just specify more than one target.

  1. In the render pipeline descriptor, add the formats of the targets:
const pipeline = device.createRenderPipeline({
  label: 'our hardcoded red triangle pipeline',
  layout: 'auto',
  vertex: {
    module,
    entryPoint: 'vs',
  },
  fragment: {
    module,
    entryPoint: 'fs',
    targets: [
      { format: 'rgba8unorm' },   // target 0
      { format: 'rgba8unorm' },   // target 1
    ],
  },
});
  2. Create or get a texture for each target.

    NOTE: They must be the same size

    You could call context.getCurrentTexture() on 2 different canvases if the canvases are the same size. It's uncommon to use canvases with multiple targets, though. More common are rendering techniques like "deferred rendering".

    This article explains the concept. It's not a WebGPU article, but the example linked above shows how to do the same thing in WebGPU.
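    As a sketch, assuming you render to offscreen textures rather than canvases (the size and usage flags here are illustrative), creating the two same-size targets might look like this:

const size = [256, 256];  // both targets must be the same size
const [texture0, texture1] = [0, 1].map(() => device.createTexture({
  size,
  format: 'rgba8unorm',  // must match the corresponding entry in the pipeline's targets array
  usage: GPUTextureUsage.RENDER_ATTACHMENT | GPUTextureUsage.TEXTURE_BINDING,
}));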

  3. Assign the textures as colorAttachments in your render pass descriptor:

const renderPassDescriptor = {
  colorAttachments: [
    {
      view: texture0.createView(),         // first target, @location(0) in the fragment shader
      clearValue: [0.3, 0.3, 0.3, 1],
      loadOp: 'clear',
      storeOp: 'store',
    },
    {
      view: texture1.createView(),         // second target, @location(1) in the fragment shader
      clearValue: [0.3, 0.3, 0.3, 1],
      loadOp: 'clear',
      storeOp: 'store',
    },
  ],
};
  4. Make your fragment shader output to each location, where @location(n) is target index n:
    struct MyFragmentShaderOutputs {
      @location(0) target0: vec4f,
      @location(1) target1: vec4f,
    }
    @fragment fn fs() -> MyFragmentShaderOutputs {
      var o: MyFragmentShaderOutputs;
      o.target0 = vec4f(1, 0, 0, 1);  // put red in target0
      o.target1 = vec4f(0, 0, 1, 1);  // put blue in target1
      return o;
    }
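To tie the steps together, here's a minimal sketch of encoding and submitting the pass, using the pipeline, renderPassDescriptor, and shader from the steps above:

const encoder = device.createCommandEncoder();
const pass = encoder.beginRenderPass(renderPassDescriptor);
pass.setPipeline(pipeline);
pass.draw(3);  // one draw writes red to texture0 and blue to texture1
pass.end();
device.queue.submit([encoder.finish()]);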
