
Conversation

@sunag (Collaborator) commented Dec 8, 2025

Related issue: #32461 (comment)

Description

Rename .getColorBufferType() -> .getOutputBufferType().
I also made the update related to the canvas buffer setup, see #31893 (comment).

@sunag sunag added this to the r182 milestone Dec 8, 2025
github-actions bot commented Dec 8, 2025

📦 Bundle size

Full ESM build, minified and gzipped. Sizes in kB (minified / gzipped).

              Before            After             Diff
WebGL         350.9 / 83.23     350.9 / 83.23     +0 B / +0 B
WebGPU        616.15 / 171.03   616.35 / 171.09   +198 B / +60 B
WebGPU Nodes  614.75 / 170.76   614.95 / 170.83   +198 B / +61 B

🌳 Bundle size after tree-shaking

Minimal build including a renderer, camera, empty scene, and dependencies. Sizes in kB (minified / gzipped).

              Before            After             Diff
WebGL         482.96 / 118.03   482.96 / 118.03   +0 B / +0 B
WebGPU        686.81 / 186.58   687.01 / 186.64   +199 B / +56 B
WebGPU Nodes  636.65 / 173.8    636.85 / 173.85   +199 B / +56 B

Comment on lines +225 to +231
let bufferType = parameters.outputBufferType;

if ( bufferType === undefined ) {

bufferType = parameters.outputType;

}
@donmccurdy (Collaborator) commented Dec 8, 2025
Hm, I think we're going to create problems having outputBufferType override outputType. It must be possible to specify a higher-precision format for the frame buffer used before tone mapping, without necessarily having a higher-precision canvas format / drawing buffer.

I am a bit worried by how similar the "output" names are, too, but I'll put that comment in #32461.
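
As a minimal sketch of the separation being argued for here, assuming hypothetical helper functions (none of this is actual three.js API; the numeric constants stand in for THREE.UnsignedByteType and THREE.HalfFloatType): the canvas / drawing-buffer format would follow outputType alone, while the pre-tone-mapping frame buffer would follow outputBufferType alone.

```javascript
// Hypothetical stand-ins for THREE.UnsignedByteType / THREE.HalfFloatType.
const UnsignedByteType = 1009;
const HalfFloatType = 1016;

// Hypothetical helper: the canvas format depends only on outputType.
function getCanvasFormat( parameters ) {

	return parameters.outputType === HalfFloatType ? 'rgba16float' : 'bgra8unorm';

}

// Hypothetical helper: the internal frame buffer depends only on
// outputBufferType, defaulting to HalfFloatType so tone mapping has headroom.
function getFrameBufferType( parameters ) {

	return parameters.outputBufferType !== undefined ? parameters.outputBufferType : HalfFloatType;

}
```

Under this split, a higher-precision frame buffer never forces a higher-precision canvas, and vice versa.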

@sunag (Collaborator, Author) commented Dec 8, 2025

outputType is not essential for defining the output buffer: if the user does not define outputBufferType, the most appropriate type for output rendering is used. I think this is still consistent with the observation above.

Possible scenarios:

// canvas buffer 16bpc + 16bpc screen output target
const renderer = new THREE.WebGPURenderer( {
	outputType: THREE.HalfFloatType,
	outputBufferType: THREE.HalfFloatType
} );

// canvas buffer 16bpc + 8bpc screen output target
const renderer = new THREE.WebGPURenderer( {
	outputType: THREE.UnsignedByteType,
	outputBufferType: THREE.HalfFloatType
} );

// canvas buffer 8bpc + 16bpc screen output target
const renderer = new THREE.WebGPURenderer( {
	outputType: THREE.HalfFloatType,
	outputBufferType: THREE.UnsignedByteType
} );

// canvas buffer 8bpc + 8bpc screen output target
const renderer = new THREE.WebGPURenderer( {
	outputType: THREE.UnsignedByteType,
	outputBufferType: THREE.UnsignedByteType
} );

// Reason for the conditional: if outputBufferType is not defined,
// the most appropriate type is selected from outputType.

// canvas buffer 16bpc + 16bpc output target
const renderer = new THREE.WebGPURenderer( {
	outputType: THREE.HalfFloatType
} );

// canvas buffer 8bpc + 8bpc output target
const renderer = new THREE.WebGPURenderer( {
	outputType: THREE.UnsignedByteType,
} );
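
The fallback behind the scenarios above can be written as a standalone function (a hypothetical extraction for illustration, mirroring the conditional in the diff, not the actual three.js source):

```javascript
// Hypothetical standalone version of the fallback from the diff:
// use outputBufferType when given, otherwise fall back to outputType.
function resolveOutputBufferType( parameters ) {

	let bufferType = parameters.outputBufferType;

	if ( bufferType === undefined ) {

		bufferType = parameters.outputType;

	}

	return bufferType;

}
```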

@donmccurdy (Collaborator) commented Dec 8, 2025

This is the case I was thinking of:

// working buffer 16bpc + 8bpc screen output target
const renderer = new THREE.WebGPURenderer( {
	outputType: THREE.UnsignedByteType,
	outputBufferType: THREE.HalfFloatType
} );

From a glance at the code it looks like outputType would be overridden by outputBufferType to HalfFloatType when configuring the WebGPU canvas context. I haven't tested this locally yet, though, and will try to do that.

EDIT: Tested and confirmed the issue.

@Mugen87 (Collaborator) commented Dec 8, 2025

How about reverting the changes to getPreferredCanvasFormat() until there is an agreement on how to proceed?

I must also say the naming of outputType and outputBufferType is a bit confusing to me. Since outputType is mainly about the canvas, maybe canvasType would be clearer?

@sunag sunag marked this pull request as ready for review December 8, 2025 04:58
@sunag sunag merged commit e49cd3f into mrdoob:dev Dec 8, 2025
10 checks passed
@sunag sunag deleted the dev-output-type branch December 8, 2025 14:51
@donmccurdy (Collaborator) commented Dec 8, 2025

@Mugen87 @sunag Please see the comment at #32501 (comment). The renaming part of the PR is OK with me, but overriding the canvas format to outputBufferType will create both a performance and functional regression. The format of the canvas / drawing buffer is unrelated to the format of the frame buffer, and there's no reason that a higher bit depth in the frame buffer should force a higher bit depth in the canvas.

For example I've tested in webgpu_postprocessing_3dlut.html. When outputBufferType is given any value — even the default, HalfFloatType — the canvas format is forced to higher precision unnecessarily. I do not think this can be released as-is.

@sunag (Collaborator, Author) commented Dec 8, 2025

I don't see anything different in that. As you can see in the screenshot below, the canvas texture is 8bpc.

Perhaps you are confusing it with #27880?

[screenshot: canvas texture inspection showing 8bpc]

@donmccurdy (Collaborator) commented Dec 8, 2025

I'm setting a breakpoint here ...

context.configure( {
	device: this.device,
	format: this.utils.getPreferredCanvasFormat(),
	usage: GPUTextureUsage.RENDER_ATTACHMENT | GPUTextureUsage.COPY_SRC,
	alphaMode: alphaMode,
	toneMapping: {
		mode: toneMappingMode
	}
} );

... and seeing the canvas context configured to "rgba16float" when outputBufferType is assigned HalfFloatType, overriding outputType. Visually the result is fine (as expected), but I believe we are now spending more memory bandwidth than necessary. The result of getPreferredCanvasFormat() is intended for the canvas context and may not be the right choice for the internal framebuffer configuration.
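
The reported override can be reproduced as plain logic (a hypothetical model of the behavior described, not the actual three.js implementation; constants stand in for THREE.UnsignedByteType / THREE.HalfFloatType):

```javascript
// Hypothetical stand-ins for THREE.UnsignedByteType / THREE.HalfFloatType.
const UnsignedByteType = 1009;
const HalfFloatType = 1016;

// Model of the behavior reported above: outputBufferType takes precedence
// over outputType, so the canvas is configured at 16bpc even when the user
// asked for an 8bpc drawing buffer via outputType.
function getPreferredCanvasFormatAsReported( parameters ) {

	const type = parameters.outputBufferType !== undefined ? parameters.outputBufferType : parameters.outputType;

	return type === HalfFloatType ? 'rgba16float' : 'bgra8unorm';

}
```

For the failing configuration from the earlier comment (outputType: UnsignedByteType, outputBufferType: HalfFloatType), this model yields 'rgba16float' where 'bgra8unorm' was requested.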

@sunag (Collaborator, Author) commented Dec 8, 2025

Apparently we have a related but different problem. The internal RenderTargets shouldn't share the same outputBufferType property; sharing it creates ambiguity about the property's responsibility and, consequently, confusion.
