A follow-up for this could be done in the future: #3132
Issues and possible solutions:
Internally, requestDepthMap() is used to request the depth texture, which enables the depth layer. The problem is that the depth layer is added to all cameras by default, so when multiple cameras are used, all of them render depth to texture.
This should be done similarly to how postprocessing is integrated into LayerComposition: when postprocessing is required for a camera, a property is set on its CameraComponent, LayerComposition adds a flag to the RenderActions, and a callback is executed from ForwardRenderer.
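A minimal sketch of that per-camera flow, modeled on the postprocessing path. All names here (requestSceneDepthMap, renderSceneDepthMap, requestDepth, onPreRenderDepth) are placeholders for illustration, not confirmed engine API:

```javascript
// Sketch only: per-camera depth request propagated through render actions,
// mirroring how postprocessing flags are handled. Names are hypothetical.

class CameraComponent {
    constructor() {
        this.renderSceneDepthMap = false;
    }
    // App code calls this on the one camera that needs depth, instead of the
    // depth layer being enabled for every camera by default.
    requestSceneDepthMap(enabled) {
        this.renderSceneDepthMap = enabled;
    }
}

class LayerComposition {
    constructor() {
        this.renderActions = [];
    }
    // When render actions are rebuilt, copy the camera's flag onto the action.
    addRenderAction(camera) {
        this.renderActions.push({
            camera,
            requestDepth: camera.renderSceneDepthMap
        });
    }
}

class ForwardRenderer {
    // The renderer only grabs depth for render actions that carry the flag.
    renderComposition(composition) {
        for (const action of composition.renderActions) {
            if (action.requestDepth) {
                this.onPreRenderDepth(action.camera);
            }
            // ... regular forward pass for this render action ...
        }
    }
    onPreRenderDepth(camera) {
        // Resolve or render the scene depth for this camera only.
        console.log('capturing depth for camera', camera);
    }
}

// Usage: only cameraA gets depth capture, cameraB renders normally.
const cameraA = new CameraComponent();
const cameraB = new CameraComponent();
cameraA.requestSceneDepthMap(true);

const comp = new LayerComposition();
comp.addRenderAction(cameraA);
comp.addRenderAction(cameraB);
new ForwardRenderer().renderComposition(comp);
```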
SceneDepth currently uses device.width and device.height for its render targets. This does not work well when depth is used as part of postprocessing on a camera that renders to a texture of a different size. The solution to the first issue above would also address this by handling the depth target per camera.
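Assuming the per-camera flow above, the depth target could then be sized from whatever the camera actually renders into rather than the backbuffer. A rough sketch; getDepthTargetSize is a hypothetical helper:

```javascript
// Sketch only: size the depth texture per camera instead of using device.width/height.
function getDepthTargetSize(camera, device) {
    const rt = camera.renderTarget;   // texture target the camera renders to, if any
    if (rt) {
        return { width: rt.width, height: rt.height };
    }
    // Fall back to the backbuffer size when rendering directly to the screen.
    return { width: device.width, height: device.height };
}
```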
We could support an option to render the depth map before the camera renders, allowing SSAO to run ahead of time and be usable in the main lighting pass. This would also prime the depth buffer on the WebGL2 platform, limiting overdraw and gaining some performance on heavy shaders.
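A rough sketch of that ordering, again assuming the per-camera depth flow above. The renderer methods (renderDepthPrepass, runSSAO, renderForwardPass) are placeholders for where such steps could live, not existing API:

```javascript
// Sketch only: depth prepass before the main pass, feeding SSAO and priming depth.
function renderCameraWithDepthPrepass(renderer, camera, scene) {
    // 1) Render depth for this camera before its main pass. SSAO can consume this
    //    texture so the occlusion term is ready for the main lighting pass.
    const depthTexture = renderer.renderDepthPrepass(camera, scene);
    renderer.runSSAO(depthTexture, camera);

    // 2) On WebGL2 the depth produced by the prepass can also be reused to prime
    //    depth testing, so expensive fragment shaders only run on surfaces that
    //    pass the depth test (reduced overdraw).
    renderer.renderForwardPass(camera, scene, {
        primedDepthBuffer: depthTexture,
        depthFunc: 'lessequal',
        depthWrite: false
    });
}
```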