[no sq] Texture/rendering fixes and improvements #15398
Conversation
For the texture downloading code, tested with the Sharpnet texture pack (because it has a sun_tonemap.png) and video_driver = ogles2:
- works on "master" (ETCF_ALLOW_MEMORY_COPY true)
- broken on "master" with ETCF_ALLOW_MEMORY_COPY false: nicebroken.mp4 (perhaps an epilepsy warning is in order)
- works on this PR (also ETCF_ALLOW_MEMORY_COPY false)
works 👍
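For anyone else wanting to reproduce this, here is a minimal sketch of the test setup. It assumes ETCF_ALLOW_MEMORY_COPY is one of Irrlicht's E_TEXTURE_CREATION_FLAG values toggled via setTextureCreationFlag(); the helper function name is made up, and the driver should be created as ogles2 to match the configuration above.

```cpp
#include <IVideoDriver.h>

using namespace irr;

// Rough repro sketch, not engine code: force the no-CPU-copy path and load a
// texture that exercises the texture downloading code under test.
static void repro_texture_download(video::IVideoDriver *driver)
{
	// With the flag off, the driver cannot fall back to a CPU-side copy of
	// the image, which is the configuration that was broken on master.
	driver->setTextureCreationFlag(video::ETCF_ALLOW_MEMORY_COPY, false);

	// sun_tonemap.png is why the Sharpnet texture pack was used for testing.
	video::ITexture *tex = driver->getTexture("sun_tonemap.png");
	(void)tex; // on broken builds the result is garbage, cf. nicebroken.mp4
}
```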
src/client/render/secondstage.cpp
Outdated
-		depth_format = video::ECF_D32;
+	auto depth_format = video::ECF_D16;
+	if (driver->queryTextureFormat(video::ECF_D24))
+		depth_format = video::ECF_D24;
For preferring a 24-bit depth buffer over a 32-bit buffer:
- Do you have data on how much it improves performance, or have you observed a performance improvement at all?
- Looking through the issue tracker, it seems there have been some issues with z-fighting in the past. Have you tested this to make sure there are no new problems?
I have done no testing beyond checking that world rendering generally works. No performance data either.
This is all just going off "24-bit seems to be the default in GL so that should be enough for us".
If x2048 were still here we could ask him why he preferred a 32-bit depth buffer, but 🤷

> Looking through the issue tracker, it seems there have been some issues with z-fighting in the past

Links please.
OK, looking into this further, the non-post-processing version of the pipeline has always used a 24-bit depth buffer, so this should be fine.
I don't see a noteworthy performance improvement on my phone from the depth format change, but I do see one when removing the ECF_A16B16G16R16F branch so that it uses ECF_A8R8G8B8 instead. The data I collected is not of very high quality, though.
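For context, here is a rough sketch of the two format choices being compared. The function is made up for illustration, and the real selection logic in src/client/render/secondstage.cpp may include additional fallbacks; only queryTextureFormat() and the format names come from the diff and discussion above.

```cpp
#include <IVideoDriver.h>

using namespace irr;

// Illustration only, not the actual secondstage.cpp code.
static void select_render_target_formats(video::IVideoDriver *driver,
		video::ECOLOR_FORMAT &color_format, video::ECOLOR_FORMAT &depth_format)
{
	// Color: prefer the 16-bit float format when available, otherwise 8-bit.
	// Dropping the first branch (always using ECF_A8R8G8B8) is the variant
	// that showed the noteworthy speedup mentioned above.
	color_format = video::ECF_A8R8G8B8;
	if (driver->queryTextureFormat(video::ECF_A16B16G16R16F))
		color_format = video::ECF_A16B16G16R16F;

	// Depth: prefer 24-bit as in the diff above, fall back to 16-bit.
	depth_format = video::ECF_D16;
	if (driver->queryTextureFormat(video::ECF_D24))
		depth_format = video::ECF_D24;
}
```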
Thanks for testing.

> I do see a noteworthy performance improvement when removing the ECF_A16B16G16R16F branch so it uses ECF_A8R8G8B8 instead

OpenGL has a 10-bit color format, and I have wondered before whether we should try using it, in case it's faster than 16-bit.
Not exactly sure what the higher precision is for anyway. It might be so that bloom, tonemapping and light effects can work correctly? (see: HDR in video)
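Not a concrete proposal, just what the 10-bit idea maps to at the raw GL level, assuming a GLES 3.0 (or desktop GL) context. This is plain OpenGL, not engine API, the function name is made up, and whether the engine's ECOLOR_FORMAT enum already exposes an equivalent is not checked here.

```cpp
#include <GLES3/gl3.h>

// Sketch only: allocate a 10-bit-per-channel color render target.
static GLuint make_rgb10a2_render_target(GLsizei w, GLsizei h)
{
	GLuint tex = 0;
	glGenTextures(1, &tex);
	glBindTexture(GL_TEXTURE_2D, tex);
	// GL_RGB10_A2: 10 bits per color channel plus 2 bits alpha. More
	// precision than RGBA8 for bloom/tonemapping intermediates, at half the
	// bandwidth of RGBA16F.
	glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGB10_A2, w, h);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
	return tex;
}
```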
and prefer it over D32 for our depth buffer; this can have performance benefits

Follow-up to #15396.

Context for commit 2 is here:
To do
This PR is Ready for Review.
How to test
First commit:
Other commits:
Check that everything still works, in particular: