TITLE: Voxel Tracer
This project is a voxel-based lightmapper built right in Unity.
============ PREFACE: How It Started ============
The initial idea for this project actually came about from a roadblock I ran into while building my Baked Volumetrics fog package in Unity. For some brief context, Baked Volumetrics is a package that recreates the look of high quality fog very cheaply by baking the color of the fog into a 3D texture. It is basically lightmapping, but for fog, and the look of the fog depends entirely on this 3D texture. The roadblock was that the only way to generate good quality 3D textures was to have a very dense and even light probe setup in the scene. The light probes had to be set up and created in the scene, and then you had to run the Unity lightmapper in order to fill those probes with lighting data.
While this did work, it had some problems...
1. The quality of the fog 3D texture was dependent on how dense the existing light probe setup was. With fairly low resolution fog textures you could get away with it, but if you wanted anything higher you were at the mercy of your light probe setup, and light probe setups are often intentionally very sparse. Too many light probes means too much data to compute and hold.
2. Longer setup/iteration times. While I did create a quick tool where you could generate a light probe group that matches the resolution of the volume, you still had to generate it and then run the Unity lightmapper, which lightmaps the ENTIRE scene. Depending on the scene and its complexity, that is a lot of extra time spent re-lightmapping the scene just to get the light probe data needed to generate the fog volume.
3. Light probe setups have to be done a certain way to get good results. With most levels and games, light probes are not uniformly placed throughout the entirety of a space, because having too many is inefficient. You only place probes where you need them. Say you had a city level with light probes covering only a couple of meters above street level, which is where most dynamic objects will be. The problem is that there is no probe data anywhere higher in the level, where there are buildings with big light sources that affect the level below. You would effectively get "blindspots" in the volume: areas where the fog color does not match the light it is supposed to be scattering.
So in order to solve all of these issues, I would essentially have to build my own independent lightmapper for the fog. So that is exactly what I did!
NOTE: I do keep the original light probe sampling method for fog color, since it does still work and is acceptable in some circumstances.
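For reference, here is a minimal sketch of what that probe-sampling approach boils down to. This is not the actual package code; the volume bounds/resolution fields and the single sample direction are my own simplifications for illustration.

// Hypothetical sketch: sample the scene's baked light probes at every voxel of the
// fog volume and bake the results into a Texture3D. Field names and the single-direction
// SH evaluation are illustrative simplifications, not the package's actual API.
using UnityEngine;
using UnityEngine.Rendering;

public class ProbeFogBaker : MonoBehaviour
{
    public Vector3 volumeSize = new Vector3(50, 20, 50);              // world-space bounds of the fog volume
    public Vector3Int volumeResolution = new Vector3Int(64, 32, 64);  // voxel resolution of the 3D texture

    public Texture3D Bake()
    {
        var colors = new Color[volumeResolution.x * volumeResolution.y * volumeResolution.z];
        Vector3[] directions = { Vector3.up }; // a real bake would average many directions
        Color[] results = new Color[1];

        for (int z = 0; z < volumeResolution.z; z++)
        for (int y = 0; y < volumeResolution.y; y++)
        for (int x = 0; x < volumeResolution.x; x++)
        {
            // Normalized voxel coordinate -> world position inside the volume.
            Vector3 normalized = new Vector3(
                (x + 0.5f) / volumeResolution.x,
                (y + 0.5f) / volumeResolution.y,
                (z + 0.5f) / volumeResolution.z);
            Vector3 worldPos = transform.position + Vector3.Scale(normalized - Vector3.one * 0.5f, volumeSize);

            // Ask Unity for the interpolated spherical harmonics probe at this point and evaluate it.
            LightProbes.GetInterpolatedProbe(worldPos, null, out SphericalHarmonicsL2 probe);
            probe.Evaluate(directions, results);

            colors[x + y * volumeResolution.x + z * volumeResolution.x * volumeResolution.y] = results[0];
        }

        var texture = new Texture3D(volumeResolution.x, volumeResolution.y, volumeResolution.z, TextureFormat.RGBAHalf, false);
        texture.SetPixels(colors);
        texture.Apply();
        return texture;
    }
}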
============ PREFACE: Lighting Methods ============
Now, which route to take for building a lightmapper? I explored a couple of different options before I eventually stumbled upon what I have here now. Mind you, I go into detail on each of these methods in the Baked Volumetrics documentation.
To keep it brief, I explored a few methods of calculating scene lighting...
1. IBL Captures: The idea is that the scene already has the underlying lighting; all we need to do is recapture it and factor it into the fog color. This worked by capturing a low quality 360 panoramic cubemap of the scene at every sample point of the volume. While this did work and produced interesting results, ultimately it did not scale well with higher resolutions. In addition, the IBL captures do not capture any incoming light, only light coming off of surfaces in the scene. So even if a given sample point was in line of sight of a light source, it would not receive the direct light from that light source.
2. CPU Lightmapper: This was a step in the right direction. The idea was that the underlying scene already has geometry, so using mesh colliders we can do a simple Physics.Raycast against that geometry and perform the needed calculations for direct light, just like a real raytracer (a rough sketch of this idea follows after this list). I was able to calculate the direct light terms from any light sources, but trying to do any other complex light calculations to achieve the look I wanted would have made this version way slower than it already was. Not to mention, the setup I had at the time came with a large number of problems that would need solving, all of which would add to the compute time. Different solutions did come to mind for speeding up the CPU lightmapper, but I decided that handling this on the GPU would be better.
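As a sketch of what that CPU direct-light pass looked like in spirit: cast a shadow ray from the sample point toward each light, and only accumulate the light if nothing blocks it. The falloff and the point-light-only assumption here are mine, not the package's actual code.

// Hypothetical sketch of the CPU "lightmapper" idea: for a sample point in the volume,
// cast a shadow ray (Physics.Raycast against the scene's mesh colliders) toward each
// point light and accumulate its contribution if the ray is unoccluded.
using UnityEngine;

public static class CpuDirectLight
{
    public static Color Evaluate(Vector3 samplePosition, Light[] pointLights)
    {
        Color accumulated = Color.black;

        foreach (Light light in pointLights)
        {
            Vector3 toLight = light.transform.position - samplePosition;
            float distance = toLight.magnitude;
            if (distance > light.range)
                continue; // outside the light's range, no contribution

            // Shadow ray: if we hit any collider before reaching the light, the sample is occluded.
            if (Physics.Raycast(samplePosition, toLight / distance, distance))
                continue;

            // Simple inverse-square style falloff; Unity's real attenuation curve differs.
            float attenuation = 1.0f / (1.0f + distance * distance);
            accumulated += light.color * light.intensity * attenuation;
        }

        return accumulated;
    }
}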
The CPU Lightmapper was a good introduction to some of the basic raytracing I knew would be necessary for this new "lightmapper" I was going to build. I did consider doing triangle raytracing, just like the CPU lightmapper was already doing with Physics.Raycast against triangle collision meshes. However, I decided against it: it was too complicated for me to understand, and besides, I knew that this lightmapper for the fog did not need 100% precision. Importantly for me, I was also fully aware that Unity has methods for leveraging the hardware on Nvidia RTX cards to do hardware accelerated triangle raytracing, but I decided against that too. The biggest reason was that I wanted to create a system that could be used on almost any kind of hardware, and not limit users who didn't have it. From my background in the early days learning about graphics in Unity on an old iMac, it absolutely sucked learning about fancy new graphics effects that I couldn't actually run because the hardware did not support DX11 features.
To get back on track: over the years of learning and studying graphics I've come across various methods employed to achieve realtime global illumination, and one solution that often came up in regards to efficiency was to voxelize the scene and trace against that instead. It's much easier to trace against a cube, and since meshes can have thousands or millions of triangles, tracing against a set of cubes that approximates the scene geometry fares much better performance wise. Precision wise it obviously has its own set of issues, but again, the final result does not need to be perfectly precise, and these voxel-based realtime global illumination methods did prove that in general they were precise enough to provide pretty accurate lighting results.
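To give a feel for why tracing against voxels is so cheap, here is a minimal sketch of marching a ray through a boolean occupancy grid. This is a simple fixed-step march (a real tracer would use a proper DDA traversal and store albedo/emission per voxel), and the grid layout is my own assumption for illustration, not this project's actual data structure.

// Hypothetical sketch: a fixed-step march through a boolean voxel occupancy grid.
// Each step is just an index lookup, versus intersecting potentially thousands of triangles.
using UnityEngine;

public class VoxelGrid
{
    public bool[,,] occupied;   // true where scene geometry was voxelized
    public Vector3 boundsMin;   // world-space corner of the grid
    public float voxelSize;     // world-space size of one voxel

    // Returns true and the hit voxel coordinate if the ray reaches an occupied voxel.
    public bool TraceRay(Vector3 origin, Vector3 direction, float maxDistance, out Vector3Int hitVoxel)
    {
        direction.Normalize();
        float stepSize = voxelSize * 0.5f; // half a voxel per step to avoid skipping cells

        for (float t = 0f; t < maxDistance; t += stepSize)
        {
            Vector3 p = origin + direction * t;
            Vector3Int v = new Vector3Int(
                Mathf.FloorToInt((p.x - boundsMin.x) / voxelSize),
                Mathf.FloorToInt((p.y - boundsMin.y) / voxelSize),
                Mathf.FloorToInt((p.z - boundsMin.z) / voxelSize));

            // Stop once the ray leaves the grid bounds.
            if (v.x < 0 || v.y < 0 || v.z < 0 ||
                v.x >= occupied.GetLength(0) || v.y >= occupied.GetLength(1) || v.z >= occupied.GetLength(2))
                break;

            if (occupied[v.x, v.y, v.z])
            {
                hitVoxel = v;
                return true;
            }
        }

        hitVoxel = default;
        return false;
    }
}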