Sunday, December 18, 2011

Back to Russian Roulette

I just realized I could've applied the Russian Roulette optimization to the new photon scattering method as well. The fundamental difference between the literature photon scattering method (from Per Christensen's paper) and the Georgia Tech method is the photon that is being stored. The former stores the incoming photon, while the latter stores the outgoing photon. This is why the latter allows us to compute the direct illumination as well as the indirect illumination.
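To make the idea concrete, here is a minimal sketch of what Russian Roulette termination could look like during scattering. This is a hypothetical helper, not the project's actual code: after a photon hits a surface, one uniform random number decides whether the photon continues as a diffuse bounce, continues as a specular bounce, or is absorbed, with probabilities given by the material's reflectances.

```javascript
// Hypothetical Russian Roulette sketch (names are illustrative).
// xi is a uniform random number in [0, 1); diffuseRefl + specularRefl <= 1.
function russianRoulette(diffuseRefl, specularRefl, xi) {
  if (xi < diffuseRefl) return "diffuse";                 // continue as a diffuse bounce
  if (xi < diffuseRefl + specularRefl) return "specular"; // continue as a specular bounce
  return "absorb";                                        // terminate the photon path
}
```

The payoff is that most photon paths terminate early, so fewer photons need to be stored overall.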

There are two improvements. The first is performance: I can cast more initial photons and still end up with fewer photons in total. The second is quality: the diffuse reflections are more subtle and pleasant than before. (Notice the nice blue diffuse reflection below.)

Here's the result. 120 initial photons, 2 bounces, ~200 photons total.


Reflection and Refraction

Performance took a huge hit, though. The shader now takes about 4 times as long to compile, my laptop is about to burst into flames while it runs, and I'm getting about 9 FPS. The scene is a bit dark because I'm only using 50 photons.

One other minor improvement was adding a specular highlight as an extra step after photon gathering (similar to what I did with shadows). It shows up as the white hotspot on the glass sphere.
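A sketch of what that extra highlight step might look like, using the classic Phong specular term. These helpers and names are illustrative, not the project's actual shader code; all vectors are assumed to be unit length.

```javascript
// Illustrative Phong specular sketch (not the project's actual code).
function dot(a, b) { return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]; }

// Reflect light direction L about normal N: R = 2(N.L)N - L.
function reflect(L, N) {
  const d = 2 * dot(N, L);
  return [d * N[0] - L[0], d * N[1] - L[1], d * N[2] - L[2]];
}

// Phong specular term: max(R.V, 0)^shininess, added on top of the
// gathered color after photon gathering is done.
function phongSpecular(N, L, V, shininess) {
  const R = reflect(L, N);
  return Math.pow(Math.max(dot(R, V), 0), shininess);
}
```

The resulting term is scaled by a specular constant and simply added to the gathered color, which is what produces the white hotspot.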

The results look nice though. Here they are!

Saturday, December 17, 2011

Graphics is all about Tricks!

Added really cheap "fake" shadows during photon gathering. Here's what I did.

Basically, after photon gathering is done and I have the final color value, I trace a shadow ray to the light source. If it's blocked, I multiply the gathered color value by a fractional constant, making it dimmer. That's it! This could also be easily extended to soft shadows, but that would be more expensive (and it would encroach on the territory of Monte Carlo path tracing.)
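The whole trick fits in a few lines. This is a sketch with illustrative names and an illustrative dimming constant, not the project's actual shader code:

```javascript
// Fake-shadow sketch (illustrative names and constant).
const SHADOW_DIM = 0.4; // fractional constant; tune to taste

// gatheredColor is an [r, g, b] array; shadowRayBlocked is the result of
// tracing a shadow ray from the shaded point to the light source.
function applyFakeShadow(gatheredColor, shadowRayBlocked) {
  if (!shadowRayBlocked) return gatheredColor;
  return gatheredColor.map(function (c) { return c * SHADOW_DIM; });
}
```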

Looks pretty convincing as far as I can tell, and it's much cheaper than the blending between full-out ray tracing and photon gathering that I had earlier.


Per Geometry Caching (Comparison)

When comparing per geometry caching with previous results, at 200 photons, I notice a few things:
  1. Performance is roughly the same, with ~9 FPS on my laptop.
  2. Per Geometry method has a much longer shader compile time, which makes sense because we now have all those if statements.
  3. Per Geometry method generally produces much nicer and cleaner results. The earlier results look a little "dirty", as if there were paint splats everywhere.
  4. Per Geometry method also has much sharper edges because the colors cannot bleed over.
The following are results. 100 initial photons, 1 bounce = 200 photons total. The first image is without per geometry caching.



Per Geometry Caching (Completed)

Continuing the work from the previous post, I completed the rendering side of per geometry caching. Unfortunately, I wasn't able to come up with a better approach than the one mentioned in the last post: brute-force if statements, one per geometry. As a result, I didn't get the speed increase I was expecting. Hopefully I can find a better way to do this and recover the performance we were supposed to get.

The results look better as expected, with no unintentional color bleeding from surface to surface. Here is a screenshot of it. 100 photons, no bounces (which means there should be no diffuse color bleeding anywhere.)


Saturday, December 10, 2011

Per Geometry Caching (In Progress)

Started working on implementing per geometry photon caching. This basically means storing photons at each geometry instead of at a global scene level. It will help performance during the gathering step, since we now only have to look at photons on the geometry rather than photons in the whole scene. It will also eliminate unintended color bleeding, since only photons on a geometry will affect that geometry's color.

I reasoned that creating a separate photon list for each geometry in the scene would be quite inefficient, since I would need three of these lists (color, position, and angles) and each list is actually a texture at the GLSL level. So I decided to tackle the problem by keeping my global photon list, but partitioning it so that each part corresponds to a geometry (e.g., photons 0-18 are on geometry0, photons 19-42 are on geometry1, and so on.)
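Computing the partition bounds for that single global list is straightforward. Here's a sketch (the helper name is illustrative): given the number of photons stored on each geometry, produce each geometry's start index and exclusive end index into the global list.

```javascript
// Sketch of computing per-geometry slice bounds in the global photon list.
function partitionOffsets(photonCounts) {
  const starts = [];
  const ends = [];
  let offset = 0;
  for (const count of photonCounts) {
    starts.push(offset);   // first photon index owned by this geometry
    offset += count;
    ends.push(offset);     // one past the last photon index (exclusive)
  }
  return { starts: starts, ends: ends };
}
```

With counts [19, 24], geometry0 owns photons 0-18 and geometry1 owns photons 19-42, matching the example above.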

I was able to do this successfully and visualize it (since the data is passed down as a texture.)


The first row is color, the second is position, and the third is angles. It is clear that the data is partitioned. For example, take a look at the top row, which represents color. The first section is all yellow; this corresponds to geometry0, which is a yellow wall, so its photon colors must be yellow. Next is the opposite blue wall, then 4 other gray walls/floors, and finally the three shapes in the scene, colored red, green, and purple respectively.

The next step is to do a lookup in the shader. In a regular programming language this would be trivial, but GLSL ES does not allow variable array indexing or variable loop bounds. This is my current problem: I have to figure out how to efficiently do a photon lookup within those partitions in GLSL.

I was going to do:

// idx - index of geometry
for (int i = PHOTON_START[idx]; i < PHOTON_END[idx]; i++) {
    // gather photons, compute color
}

However, const arrays are not allowed in GLSL ES. So my next alternative (a rather nasty JavaScript hack) is to have JavaScript hard-code multiple IF statements:

if (idx == 0) {
    for (int i = PHOTON_START_01; i < PHOTON_END_01; i++) {
        // compute color...
    }
}
if (idx == 1) {
    for (int i = PHOTON_START_02; i < PHOTON_END_02; i++) {
        // compute color...
    }
}
...

I'm not sure if there's a smarter way to do this. It would be quite a few IF statements, one per geometry, and it would not scale well. And if the hardware visits every inner IF statement anyway (something I'd have to figure out), then this would not improve performance at all (in fact it would make things worse because of all the conditionals.)
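For what it's worth, the JavaScript side of the hack could look something like this sketch (the function name is illustrative): bake the per-geometry partition bounds directly into the generated GLSL source as one hard-coded IF per geometry, since GLSL ES forbids variable indexing.

```javascript
// Sketch: generate the unrolled per-geometry IF chain as GLSL source text.
// starts[g] / ends[g] are geometry g's slice bounds in the global photon list.
function genGatherBranches(starts, ends) {
  let src = "";
  for (let g = 0; g < starts.length; g++) {
    src += "if (idx == " + g + ") {\n";
    src += "    for (int i = " + starts[g] + "; i < " + ends[g] + "; i++) {\n";
    src += "        // gather photons, compute color\n";
    src += "    }\n";
    src += "}\n";
  }
  return src;
}
```

Since the bounds are now compile-time constants in the shader, each loop satisfies GLSL ES's constant-bound requirement, at the cost of a longer compile and a chain of conditionals.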

Friday, December 9, 2011

Refraction...

Got refraction working in real time. It doesn't look quite realistic without surface reflections though...

Also, apparently, this runs on Chrome now. No idea why it works.

(80 photons, no bounces)


Gaussian Looks Nice!

Since this week was the last week of school, I wasn't able to do much, but the one thing I did accomplish was implementing Gaussian filtering.

Gaussian filtering is used during photon gathering. I needed to tweak a few operations and constants here and there before the image looked nice. It's definitely much smoother than the cone filtering I had before.
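For reference, the standard Gaussian filter from the photon mapping literature (Jensen) weights each gathered photon by its distance d to the gather point given the search radius r, with constants α = 0.918 and β = 1.953. A sketch below; the constants I actually tweaked in the project may differ from these textbook values.

```javascript
// Standard Gaussian filter weight from the photon mapping literature:
// w = alpha * (1 - (1 - e^(-beta d^2 / 2r^2)) / (1 - e^(-beta)))
// Photons near the gather point (d ~ 0) get weight ~alpha; photons near the
// edge of the search radius get much smaller weights, smoothing the result.
function gaussianWeight(d, r, alpha, beta) {
  alpha = alpha === undefined ? 0.918 : alpha;
  beta = beta === undefined ? 1.953 : beta;
  const num = 1 - Math.exp(-beta * (d * d) / (2 * r * r));
  const den = 1 - Math.exp(-beta);
  return alpha * (1 - num / den);
}
```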

Here's the image! (100 photons, no bounces)


Friday, December 2, 2011

Beta Review and Future Plan

Today I had my Beta Review with Joe and Norm, and I'd say it went pretty well. Norm was generally satisfied with the results, mainly because I am able to produce a decent image in real time with relatively few photons, despite the various corners that were cut and the many "cheats" I used on my side of things.

There were a few good points that came out of the conversation:
  • Gaussian filtering might produce better results when doing photon gathering.
  • Caching photons per surface will produce sharper corners, eliminate unintended bleeding, and speed up photon search.
  • I should look into an acceleration structure now that I have random access using textures.
  • I should talk to Joe about how to optimize branches on GLSL.
  • I should look into Arvo's paper on caching rays in 5D space (???)
With the conversation I had this morning and this list of features, I feel that I am in pretty good shape to move forward and finish up the few things I have left. Here is my list of tasks from now to the end of the semester.

Must:
  • Gaussian fall-off
  • Complete global illumination with mirror and transparent materials
  • Per surface photon caching (at least investigate)
  • Some kind of acceleration structure (at least investigate)
Optional:
  • Interactive user interface that allows the user to modify the scene
With the above list of tasks, I have organized them into the remaining timeline as follows:

Dec 4 - 10
  • Gaussian fall-off for photon gathering
  • Photon caching per surface
Dec 11 - 17
  • Acceleration structure
  • Reflection and refraction
  • User interactivity
Dec 13 - 21
  • Final user interface work
  • Movie presentation

Self Evaluation

We're at the point of Beta Review, and it's time to do a self evaluation of how the project has been coming along.

Looking back at what I set as the original goals, they were quite reasonable. After all, my project is only to implement global illumination with photon mapping - pretty standard stuff. However, there are still many features that I left out and did not implement. Why? I think the one aspect of this project that was particularly lacking was the initial research and planning of the framework I would be using. Most of the time spent on the project went not into the global illumination algorithm, but into debugging and working around the nature of the framework. More specifically, there were many limitations of GLSL ES that I did not foresee, which made many standard features of a scene graph or photon mapper very tricky and difficult to implement. With more thorough research, thinking, and planning through every step of the project, I believe things would have gone much more smoothly. Nonetheless, I managed to make certain compromises and find various alternative methods along the way that allowed me to build some kind of global illumination renderer on WebGL.

The following is a list of features I have today:
  • Scene list
  • Ray tracing, with reflections and refractions
  • Photon scattering
  • Rendering with photon gathering (with only full reflection or refraction)
  • Real-time interactive user-controlled camera
And here are features that were compromised:
  • Scene graph - instead using a simple scene list
  • Photon scattering and rendering - not using standard literature technique
Finally, here is a list of future tasks that may or may not be implemented:
  • Interactive user interface - allow users to modify the scene
  • Irradiance Caching
  • Caustics
So, in a way I have met my main goals, but along the way I had to cut many corners. Looking ahead to the next few weeks, I see myself working on the features that weren't implemented, or weren't implemented well, and trying to make them fully functional and more optimized. I also plan to add more interactivity by allowing users to modify the scene. Finally, I'm afraid I won't be getting around to my future tasks, specifically irradiance caching and caustics, both of which are not well supported by my current framework.

After the Beta Review tomorrow, I will post more details about the future timeline of the next few weeks.

Thursday, December 1, 2011

Lookin' Not Bad!

Improvements:
  1. Figured out I can dynamically load textures using a built-in class from Three.js (which could've saved me >6 hours if I had figured it out earlier.)
  2. Loaded all photon values (position, color, and incident angles) into textures for lookup in the shader. It turns out precision loss isn't a big problem.
  3. Played with the photon gathering equation: increased the radius and dimmed the light contribution.
With all photon values in textures, we can now do random lookups and are no longer constrained by the limitations of arrays. The following result uses 200 initial photons (no bounces, just to the diffuse surface), and it's not looking too bad!
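The packing step above can be sketched like this: flatten each photon attribute into a flat RGB float array, which can then be uploaded as a texture (e.g. via a Three.js DataTexture) for random access in the shader. Names here are illustrative, not the project's actual code.

```javascript
// Sketch: pack photon positions into a flat RGB float array, one photon per
// texel, so the shader can fetch photon i by sampling texel i.
function packPhotonPositions(photons) {
  const data = new Float32Array(photons.length * 3);
  for (let i = 0; i < photons.length; i++) {
    data[3 * i]     = photons[i].x; // R channel
    data[3 * i + 1] = photons[i].y; // G channel
    data[3 * i + 2] = photons[i].z; // B channel
  }
  return data;
}
```

Color and incident-angle lists would be packed the same way into their own textures, which matches the three rows in the visualization from the per geometry caching post.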


And here's an experiment on a mirrored surface. Only 100 photons (performance dies at 200).