
GPU Web 2023-07-26

Note that unless stated otherwise this is a GPU for the Web community group meeting and not a working group meeting.

Chair: Corentin

Scribe: Ken

Location: Google Meet

Tentative agenda

  • Administrivia
    • Upcoming F2F?
    • Making a 2-3 minute video update for TPAC?
  • CTS Update
  • “Tacit Resolution” Queue
    • Add HTMLImageElement and ImageData to copyExternalImageToTexture #3430
  • Short update on building the spec #4241
  • Only one pipeline layout for each entry point can be given as compilation hint #4233
  • Unify GPUQueue.writeTexture() and GPUQueue.copyExternalImageToTexture() #4206
  • Add rgb10a2uint texture format #4245
  • (stretch) Extended brightness range rendering #4239
  • Agenda for next meeting

Attendance

  • Apple
    • Mike Wyrzykowski
    • Myles C. Maxfield
  • Google
    • Brandon Jones
    • Christopher Cameron
    • Corentin Wallez
    • Kai Ninomiya
    • Ken Russell
    • Shannon Woods
  • Microsoft
    • Rafael Cintron
  • Mozilla
    • Kelsey Gilbert

Administrivia

  • Upcoming F2F?
    • We're all implementing right now - nice to socialize, but maybe we don't need a F2F at the moment. How about early next year?
    • MM: past few months have been pretty light with standardization. Can imagine transition out of current lull.
    • CW: depends on when WebKit/Firefox are caught up with the current spec. From WebKit repo it seems there's a lot of work still needed for full WebGPU impl. Oct/Nov might still be early for a F2F.
    • MM: don't let WebKit hold the group back.
    • KG: I'd like to make our impl right. Not WebKit holding us back. May be a time when we're not all caught up to the spec, but we're also comfortable moving forward. Vague, but will we be hungry to work on more spec stuff, and usefully?
    • KG: think December would be soon enough, but it's usually mostly lost to the holidays. Jan/Feb?
    • KR: my vote would be for the beginning of the new year - may take vacation in Dec.
  • Making a 2-3 minute video update for TPAC?

CTS Update

  • KN: shader tests have been ongoing. Compat tests also ongoing. Expect to hear more about them soon. Brandon found some tests that were slow and fixed them; they were creating the same pipeline too many times.
  • BJ: specific problem - with layout: "auto" pipelines, our internal caching scheme failed, because auto layouts from different pipelines aren't compatible with each other.
  • MM: test chunking?
    • KN: sorry, no progress

“Tacit Resolution” Queue

  • Add HTMLImageElement and ImageData to copyExternalImageToTexture #3430
    • KN: blocked on us. Talked with Gregg a bit. All good, we'll go ahead.
    • Side conversation about colorspace conversions; a bit orthogonal to the issue. (Usage sketch below.)
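
  A rough usage sketch, assuming #3430 lands as discussed and HTMLImageElement becomes a valid copyExternalImageToTexture source; everything other than the new source type follows the existing API:

      // Sketch: upload a decoded <img> into a texture via copyExternalImageToTexture().
      // Only the HTMLImageElement source is the new part from #3430.
      async function uploadImage(device: GPUDevice, url: string): Promise<GPUTexture> {
        const img = new Image();
        img.src = url;
        await img.decode(); // make sure the image is fully decoded before copying

        const texture = device.createTexture({
          size: [img.naturalWidth, img.naturalHeight],
          format: 'rgba8unorm',
          usage: GPUTextureUsage.TEXTURE_BINDING |
                 GPUTextureUsage.COPY_DST |
                 GPUTextureUsage.RENDER_ATTACHMENT, // required destination usages for this copy
        });

        device.queue.copyExternalImageToTexture(
          { source: img },                 // HTMLImageElement as the external image source
          { texture, colorSpace: 'srgb' }, // destination, with the usual color space tagging
          [img.naturalWidth, img.naturalHeight],
        );
        return texture;
      }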

Short update on building the spec #4241

  • KN: the spec build was pretty flaky. Oguz took a look, and it's no longer flaky; it now builds inside a Docker container. This comes with a dev container setup, so you can develop either locally or in a container hosted by GitHub.
  • Added a document describing how to use this. You can click a button and it'll create a container and editing environment for you. Check it out!
  • Old build process still works.

Only one pipeline layout for each entry point can be given as compilation hint #4233

  • MM: an external report (from someone not a CG member, I think), asking how to use the hints.
  • MM: Most had answers, one didn't. They use the same fragment shader for multiple vertex shaders, but with different pipelines. With WebGPU you have a pipeline layout shared. When I specify the hint for precompilation for VS/FS, it will be used in many different pipelines with different pipeline layouts. What do I do to let it do the needed precompilation?
  • MM: right now you can't, have to call createShaderModule many times - don't want that.
  • MM: they described a request, not a proposal. I provided a proposal. Inside the hints, for every entry point, you can specify one pipeline layout. Change the IDL so you can specify a sequence of pipeline layouts.
  • KG: why not tell them to just make more shader modules?
  • MM: then need to re-parse the source. Maybe you'd have an internal cache. Reasonable counterproposal. I still prefer the one where we change the API, because it's more clear to authors. Also, if we get 2 pipeline layouts for the same entry point, and (unlikely) the two layouts don't conflict, might want to compile that shader with the union of the two pipelines, so we only have to compile it once.
  • KN: I think we should do it. If it works for you…
  • CW: why multiple layouts instead of multiple hints? In the future, if the dictionary grows further - won't we want to specify multiple of these as well? (Sketch of this shape below.)
  • MM: you're right - that's a better proposal than mine.
  • CW: otherwise, sounds good.
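
  A sketch of the "multiple hints" shape CW suggested - purely illustrative: the member names (compilationHints, entryPoint, layout) and the idea of repeating an entry point once per layout are assumptions about where #4233 could land, not settled API. wgslSource and the two pipeline layouts are assumed to already exist.

      // Illustrative only: hint the same fragment entry point once per pipeline layout
      // it will be used with, so the implementation can precompile for each combination.
      const module = device.createShaderModule({
        code: wgslSource,
        compilationHints: [
          { entryPoint: 'fsMain', layout: layoutForPipelineA },
          { entryPoint: 'fsMain', layout: layoutForPipelineB }, // same entry point, second layout
          { entryPoint: 'vsMainA', layout: layoutForPipelineA },
          { entryPoint: 'vsMainB', layout: layoutForPipelineB },
        ],
      });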

Unify GPUQueue.writeTexture() and GPUQueue.copyExternalImageToTexture() #4206

Add rgb10a2uint texture format #4245

  • KN: two questions. 1) Procedurally, how do we want to add this to the spec? (answered)
  • KN: 2) should this be feature-detectable, since chrome's shipped without it? Find a way to let applications know this texture format's available? Or, small enough case and it'll go away soon, so let devs try it and catch the exception?
  • KG: feature detection's straightforward. It's an enum, we'll throw.
  • KN: yes, the cost is just an extra texture you didn't need.
  • CW: you can try creating an error texture inside an ErrorScope. (Sketch below.)
  • KN: fine with me. Don't think people will feature-detect this much, if at all.
  • CW: let's add it. Q about Metal. We can resolve it offline.
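
  A sketch of the feature-detection approach discussed above: an implementation that doesn't recognize the format enum throws a TypeError from createTexture(), and wrapping the probe in a validation error scope keeps any other error it produces off the device.

      async function supportsRgb10a2uint(device: GPUDevice): Promise<boolean> {
        device.pushErrorScope('validation');
        let supported = true;
        try {
          // Throws a TypeError if the implementation doesn't know the enum value yet.
          const probe = device.createTexture({
            size: [1, 1],
            format: 'rgb10a2uint',
            usage: GPUTextureUsage.TEXTURE_BINDING,
          });
          probe.destroy(); // the probe texture itself isn't needed
        } catch {
          supported = false;
        }
        await device.popErrorScope(); // discard any validation error from the probe
        return supported;
      }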

(stretch) Extended brightness range rendering #4239

  • CW: this is HDR stuff. ccameron@ here to discuss and present.
  • (presentation by Chris, link?)
  • MM: depending on the exact details, can get to the state you're suggesting.
  • CC: yes, do this unless this other thing is specified.
  • KG: irony, YUV would save us from this.
  • Clip luminance but not chrominance.
  • CC: need to be more precise about it. It's in the primaries of the display device. Unobservable to JS.
  • CW: what do we need to add to the WebGPU spec? Always clamp luminance boolean, and a way to turn that off?
  • MM: spec already says, clamp the luminance. Chris suggests an opt-out.
  • CC: also tighten up the language around luminance.
  • KG: you just want this in terms of the output display. Need to draw myself diagrams.
  • CC: HDR is the same as SDR, it just goes higher.
  • KN: details aren't clear to me. That's why I wrote the spec the way it is - clamp the luminance but not the chrominance.
  • CC: it's a good intuition.
  • KG: want test vectors. Given this input, this is what we expect. Usually - we'll agree on the right words, but later, didn't agree on what they meant. If we agree on the numbers, it's simpler.
  • MM: right now I can put 1,000,000 in one of the color components. No display can show this. What do you put on the screen? Hard question. Not prepared to resolve this now. Just wish that the group would stay away from that.
  • CW: +1 on test vectors. If clamping is done in the primaries of the display, we can't define that. Trying to stay away from further discussion - e.g. values way out of range - wonder if the canvas spec could add a predefined extended-range behavior, the same way there's a predefined color space. Just use that.
  • MM: Chris's proposed at least two of those. :) Conversations ongoing.
  • KN: whatever the behavior is, given the numbers - want that same behavior across canvases, images, maybe videos?, CSS if they add this feature. Scope of it will be on this group. Must be consistent.
  • CC: agree. Also don't want someone sending 1,000,000 for all color channels. Things become ill-conditioned numerically. You change by 0.1% and it's a completely different color. Maybe tighten up a bit, but not that much.
  • CC: CSS working group's talking about HDR. First thing - how do we turn it off/down. :)
  • CW: sounds like general consensus - enum for clamping vs. not. What's the next step?
  • MM: add a field to GPUCanvasConfiguration, clamp vs. don't.
  • CC: I'm in favor of that. In its own metadata structure? Potential for more metadata? Something like CAMetalLayer API.
  • CW: eventually consistent with other canvas rendering contexts, CSS?
  • CC: yes. There's an explainer.
  • CW: we ready to accept a PR of this form into the WebGPU spec?
  • KN: this works for float16. Can define numbers > 1. We're planning to add RGB10 for wide color. Approved, not in spec yet. How would we be able to use 10-bit here? Need more metadata, "here's what 1.0 (1023) means"?
  • CC: probably need to use one of srgb and display-p3. Need to agree about where 1.0 is. People want metadata - can look at CAEDRMetadata on Mac.
  • MM: 10-bit linear - don't think it's a good fit for HDR.
  • CC: no, don't want that. But, if you have HLG or PQ encoded space - we'll get to rec2100-hlg and pq once we pay down technical debt on the spec. Those are good at using 10 bits to reach large brightness range. PQ - usually goes up to 50x SDR white.
  • MM: few weeks ago, proposed idea that when you sample from 10-bit texture, you get values > 1.0 and < 0.0. Rejected by the group because it wasn't portable.
  • CC: XR10? I'm not familiar with the HW details. You're in PQ buffer - that goes to compositing - you never sample a texture and have it colorspace converted for you.
  • MM: right now, have formats where you sample and get the right thing. No such thing for the PQ curves. Have to do correct math. Not a dealbreaker.
  • CC: probably will have to support multiple tuning knobs. I don't see PQ sampling happening.
  • CW: talk offline about external texture. Start wrapping up?
  • Propose:
    • Add canvas color metadata with allow-extended or similar, to the HTML spec next to PredefinedColorSpace.
    • Reference that in GPUCanvasConfiguration, and tighten up the graphic Chris presented. (Configuration sketch below.)
  • MM: this group doesn't have jurisdiction over HTML spec. The name of the enum isn't typed by web authors, so could be put into WebGPU spec and moved later.
  • KG: want this more fleshed out.
  • CC: going back, what's a test vector?
  • KG: we're saying, map this input to this output. It verifies the algorithm in the spec, and gives us shared examples of specific behavior. Want numbers.
  • CC: makes sense. Can write something, might be long. Will have all the math.
  • KN: aren't the expected results based on what the display has?
  • KG: yes, it's based on what the display does.
  • KN: we're writing the algorithm then.
  • KG: it can be hard for people to wrap their heads around the algorithm. An explainer defining what luminance=1.1 means is important.
  • CW:
    • Do everything in WebGPU spec at first
    • Explainer to describe what the algorithm does
  • CC: sounds good. Can give examples with numbers. And per Windows / Mac / Android APIs.
  • MM: most of what you're about to write is implemented in Core Animation. Ideally, would be compatible with what CA does today, and flexible enough that if CA makes changes in the future, it won't break the web.
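
  A sketch of what the GPUCanvasConfiguration opt-out could look like. The member name (toneMapping) and the 'standard'/'extended' values are placeholders for whatever the group settles on; rgba16float is just one canvas format that can hold values above 1.0, and device and canvas are assumed to exist.

      const context = canvas.getContext('webgpu')!;
      context.configure({
        device,
        format: 'rgba16float',             // float16 canvas, so the shader can write > 1.0
        colorSpace: 'display-p3',
        alphaMode: 'premultiplied',
        toneMapping: { mode: 'extended' }, // placeholder name: opt out of SDR luminance clamping
      });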

Agenda for next meeting

  • Pacific time!
    • Asked APAC attendees for items, none yet
    • Will send these if we get some
    • Otherwise, if we don't get items from them…hold the meeting, or cancel?
    • MM: if no agenda items, cancel. :)
    • CW: will check with the APAC folks.
  • For the next Atlantic time meetings:
    • HDR (see if we have explainer, etc.)
    • Compat
      • Google would like to present about this, will send explainer earlier
    • PING at some point?
      • Should invite some PING members to this group to discuss #3101