Transmissive objects create a new render target for each render pass of a scene.
When rendering a WebXR scene in a headset, there is one pass for the left eye and one for the right eye.
Looking at the output of the Oculus profiling tool, you will see something like this:
This shows that the scene is rendered to a 64-bit buffer (1), then everything is drawn to the left eye (2).
After that, the scene is again rendered to a 64-bit buffer (3) and drawn to the right eye (4).
(Note that this trace was captured with the EXT_multisampled_render_to_texture extension enabled for easier reading. Regular multisampling looks more complicated but has similar timing.)
Surface 1 renders as usual.
Surface 2 is the texture provided by WebXR. Because there is a switch to another surface at the end, the GPU has to save the color and depth information, so it flushes them out to main memory (StoreColor: 0.446 ms, StoreDepthStencil: 0.833 ms).
Surface 3 again renders as usual.
Surface 4 is the same texture as surface 2. The GPU now has to load the color and depth data back from main memory (LoadColor: 0.237 ms, LoadDepthStencil: 0.534 ms) before it can continue rendering.
This flushing costs about 2 ms per frame. At 90 fps the frame budget is roughly 11.1 ms, so the flush alone wastes almost 20% of it.
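The arithmetic behind that claim can be checked directly; the millisecond figures below are the store/load costs reported in the trace above:

```javascript
// Sanity-check the frame-budget math from the trace figures above.
const fps = 90;
const frameBudgetMs = 1000 / fps; // ≈ 11.11 ms available per frame at 90 fps

// Store/load costs reported by the profiler for the WebXR surface:
const flushOverheadMs = 0.446 + 0.833 + 0.237 + 0.534; // ≈ 2.05 ms

// Fraction of the frame budget spent on the flush:
const wastedFraction = flushOverheadMs / frameBudgetMs; // ≈ 0.18, i.e. almost 20%
console.log(frameBudgetMs.toFixed(2), flushOverheadMs.toFixed(2), wastedFraction);
```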
This flush can be avoided by rendering surfaces 1 and 3 before using them in the scene. I made a small change to WebGLRenderer.js to do so and got the following result:
As you can see, surfaces 1 and 2 render each eye to a buffer, and surface 3 (which is the WebXR framebuffer) no longer has to store/load depth or load color.
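The effect of that reordering can be illustrated with a toy model. This is not the actual WebGLRenderer.js patch; the pass names and the `countStoreLoadPairs` helper are made up for illustration. The point is that binding another render target between two passes to the WebXR framebuffer forces a store/load round trip, while grouping the transmission passes first avoids it:

```javascript
// Toy model: count how often the XR framebuffer must be flushed to main
// memory and reloaded because another render target was bound in between.
function countStoreLoadPairs(passOrder) {
  let pairs = 0;
  let pendingStore = false; // XR contents were flushed out and must be reloaded
  let prev = null;
  for (const target of passOrder) {
    // Switching away from the XR framebuffer forces a store to main memory.
    if (prev === "xr" && target !== "xr") pendingStore = true;
    // Returning to the XR framebuffer then forces a matching load.
    if (target === "xr" && pendingStore) {
      pairs += 1;
      pendingStore = false;
    }
    prev = target;
  }
  return pairs;
}

// Original order from the trace: transmission buffer, left eye,
// transmission buffer, right eye (surfaces 1–4).
console.log(countStoreLoadPairs(["transmission", "xr", "transmission", "xr"])); // → 1

// Patched order: both transmission buffers first, then both eyes back to back.
console.log(countStoreLoadPairs(["transmission", "transmission", "xr", "xr"])); // → 0
```

With the patched order the two XR passes are consecutive, so the driver never needs to spill the tile contents between them.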