Sunday, November 27, 2011

Attempt to Optimize

Today I was trying to answer two questions:
  1. Is the limit real? Is there a hard limit on the number of operations or loops, and if so, which one is breaking the code?
  2. Can I optimize the photon gathering code so that more photons can be used?
The answer to the first question is yes. There is indeed a limit on the number of operations. Specifically, what is making the shader fail is the number of array look-ups: with too many of them, the shader refuses to compile, so the number of array look-ups needs to be decreased. Currently, array look-ups are used for fetching all photon positions, photon colors, and photon incident angles, which are held as three separate arrays (so far I do not know how to pass a struct array as a uniform).
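For reference, the per-photon data currently lives in the shader as three separate vec3 array uniforms, roughly like this (a sketch; the names are illustrative, not my exact code):

```javascript
// Fragment shader declarations, held as a JS string as WebGL requires.
// Every photon costs three array look-ups per pixel in the gather loop.
var photonDecls =
  "uniform vec3 photonPositions[PHOTON_COUNT];\n" + // PHOTON_COUNT is patched
  "uniform vec3 photonColors[PHOTON_COUNT];\n" +    // into the source string
  "uniform vec3 photonIncidents[PHOTON_COUNT];\n";  // before compiling
```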

So I spent the rest of the day trying to reduce the number of array look-ups. My solution was to use a texture and do a texture sample instead of using vector arrays. This was a very long process that took me the whole day. The first step was to figure out how to go from an array of floats to an image in Javascript. It turns out there isn't any image library for doing this (i.e., something like SOIL for C++), but I finally found that I could use the HTML canvas element: I can individually set pixel values on the canvas and then export the canvas as an image data URL. Once I had the data URL, I tried passing it to WebGL as a texture, but (after several hours of research) it turns out a WebGL texture cannot be created from a data URL; the data must first be saved as a real image before it can be loaded as a GL texture. I had no better idea (and had already spent too much time on this to give up), so I decided to save the texture out as a real file on the server using PHP, then load that file as a GL texture. After some time I got it to work, and I could correctly load a texture dynamically created from an array of floats.
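The float-to-image step looked roughly like this (a simplified sketch with illustrative names, assuming the values are already in the 0-1 range):

```javascript
// Pack an array of floats (3 per pixel) into a canvas and export a data URL.
function floatsToDataURL(floats, width, height) {
  var canvas = document.createElement("canvas");
  canvas.width = width;
  canvas.height = height;
  var ctx = canvas.getContext("2d");
  var imgData = ctx.createImageData(width, height);
  for (var i = 0; i < width * height; i++) {
    imgData.data[4 * i]     = Math.round(floats[3 * i]     * 255); // R
    imgData.data[4 * i + 1] = Math.round(floats[3 * i + 1] * 255); // G
    imgData.data[4 * i + 2] = Math.round(floats[3 * i + 2] * 255); // B
    imgData.data[4 * i + 3] = 255;                                 // opaque alpha
  }
  ctx.putImageData(imgData, 0, 0);
  return canvas.toDataURL("image/png"); // then POSTed to the PHP script
}
```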

The data that I decided to convert to textures are the color values and the incident angles. The color values transferred nicely (as expected), since they already lie between 0 and 1, but I had major issues with the incident angles because their components can be negative. I first tried to remap the values, such that 0-0.5 represents negative and 0.5-1.0 represents positive. However, this led to too much loss of precision and the results did not come out correctly. Below is an image depicting this loss of data:

Next I decided to separate the magnitude and the sign into two separate textures. I succeeded in passing these two textures to the shader, but I was not able to correctly convert them back or use them. I suspect that the texture sampler's interpolation is messing things up; ideally I want discrete values, but the RGB values get interpolated when the texture is sampled in GLSL, so that could be the source of the error.
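If interpolation really is the culprit, one thing I still want to try (an untested sketch) is forcing nearest-neighbor sampling when the texture is set up, so texture2D() returns the stored texel values unfiltered:

```javascript
// Untested sketch: request nearest-neighbor filtering so samples return the
// exact stored texels instead of interpolated ones. photonTexture is illustrative.
gl.bindTexture(gl.TEXTURE_2D, photonTexture);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
```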

So finally, I decided to leave the incident angles out of the equation for now. With the improvements I made today, I am now able to have a total of 140 photons in the scene, almost triple the amount I had earlier. This is a good improvement; however, it is still nowhere near enough to produce a good image.

Next I am planning to also convert the photon positions into a texture, but this will require a much more involved conversion, since positions are not confined to the 0-1 range. Hopefully it will allow me to loop through even more photons. However, I have read that texture samples are really slow, so I am not sure about the effect on performance. Right now I am only doing one texture sample for the color per pixel, but if I were to use a texture for positions, there would be as many texture samples as there are photons.
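The look-up itself would be simple enough; the concern is that it happens once per photon, per pixel. A sketch of what the fetch might look like (illustrative names; positions would also need to be remapped into the 0-1 range on the Javascript side and rescaled here):

```javascript
// GLSL fragment of the gather loop, held as a JS string. With PHOTON_COUNT
// photons this is PHOTON_COUNT texture samples per pixel.
var positionFetch =
  "  for (int i = 0; i < PHOTON_COUNT; i++) {\n" +
  "    vec2 uv = vec2((float(i) + 0.5) / float(PHOTON_COUNT), 0.5);\n" +
  "    vec3 photonPos = texture2D(uPhotonPositions, uv).rgb;\n" +
  "    // photonPos would be rescaled back to scene coordinates here\n" +
  "    // ...radiance estimate using photonPos...\n" +
  "  }\n";
```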

Below is the current result, with the improvements described above and the alternative photon scattering technique from the previous post.

Friday, November 25, 2011

Different Photon Casting

Experimenting with a different photon tracing technique, described at http://www.cc.gatech.edu/~phlosoft/photon/.

This technique is meant for fast visualization, using the photon map for both direct and indirect illumination. The photon power calculation is a little different than the conventional method: instead of directly storing the power of the incoming photon, it "absorbs" some RGB values and then stores the result. So a blue wall would have blue photons stored on it, as opposed to white as usual. This allows the photon map to provide diffuse reflection (as usual) as well as direct illumination. (There is no Russian Roulette involved with this method.)
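In other words, the stored power is the incoming power filtered by the surface color, something like (a sketch, not the exact code from the linked page):

```javascript
// Sketch: the photon "absorbs" the complement of the surface color, so the
// stored photon carries the surface's tint (blue wall -> blue photons).
function storedPower(incoming, surfaceColor) {
  return [incoming[0] * surfaceColor[0],
          incoming[1] * surfaceColor[1],
          incoming[2] * surfaceColor[2]];
}
```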

Here's just the photon map visualization with 1500 photons:



Here's a rough rendered result with 60 total photons:



The funny/cool thing is, you can kinda see the scene if you blur the photon viz above:



Encountering a Rather Big Problem...

I implemented photon gathering on the shader. A ray is first traced into the scene. Once it hits a surface, the shader looks for photons within a certain radius (spherical for now) and computes the radiance. This is done by adding up the power (RGB channels) of all nearby photons, weighting each by the dot product of the surface normal and the photon's incident angle, and finally multiplying by a linear drop-off filter (I believe this is cone filtering).
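In GLSL terms, the estimate looks roughly like this (a sketch with illustrative names, not my exact shader code; PHOTON_COUNT and GATHER_RADIUS would be patched-in constants):

```javascript
// GLSL gather loop, held as a JS string. Photons beyond GATHER_RADIUS are
// ignored; nearer photons are weighted more heavily by the linear (cone) filter.
var gatherSrc =
  "  vec3 radiance = vec3(0.0);\n" +
  "  for (int i = 0; i < PHOTON_COUNT; i++) {\n" +
  "    float d = distance(hitPoint, photonPositions[i]);\n" +
  "    if (d < GATHER_RADIUS) {\n" +
  "      float diffuse = max(dot(normal, -photonIncidents[i]), 0.0);\n" +
  "      float cone = 1.0 - d / GATHER_RADIUS; // linear drop-off\n" +
  "      radiance += photonColors[i] * diffuse * cone;\n" +
  "    }\n" +
  "  }\n";
```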

Here is an initial result layered on top of ray tracing. 20 photons are initially cast, with 49 photons in the final global photon map.



The problem now is that I cannot increase the number of initial photons past 20 (on my laptop) or 35 (on a machine with a decent graphics card). The shader simply fails to initialize. I believe I am hitting some sort of temporary-variable limit, operation limit, or loop-size limit. This is quite discouraging, since I need to cast at least 100 initial photons (~220 final photons) to make a decent image. I will further investigate the cause of the shader failure, and whether there is a way around it.

Otherwise, if such a barrier does exist, I have a few backup ideas, but they will require major alterations to my framework:
  1. Instead of having one big global photon map that takes a long time to loop through, create "local" photon maps, one per object in the scene, and loop through only that object's local map when computing radiance.
  2. Look at a kd-tree as an acceleration structure. It would help reduce look-up time, but I doubt it would get me from 20 to 100. I also doubt it is even possible, since GLSL ES does not allow random access to arrays.
  3. Discard the single viewing-plane framework and render onto the actual textures of WebGL geometries. This is quite "crazy" and I'm not sure whether it will work; it would also require huge changes to my code - basically a rewrite of most of my current framework.
I will keep my fingers crossed and hope for the best, since this is very critical to the success of the project.

Thursday, November 17, 2011

Ready for Radiance Gathering!

Success! I was able to pass large amounts of photon data down to the shader.

This was the part I had been worrying about for a while. When I initially had photon casting and rendered the photons on the shader, the performance was terrible (10-15 fps). I only had about 50-100 photons, which is nowhere near enough for a good render, and I didn't even have incident angle data - I was only passing down positions and color (power). This worried me. This week, with a completed photon scattering step, I started passing down the complete photon data (position, color, and incident angles) in much larger numbers, but without rendering the photons themselves. When I first ran it, I was expecting my laptop to burst into flames. However, it was completely okay. So I proved that the shader can handle large amounts of floating point data without failing. This is a huge relief, and I am now ready to move on to radiance gathering on the shader! This is exciting!

The numbers I was working with were:
  • 500 photons initially cast from light source
  • ~1030 photons total after photon scattering
  • 3 attributes per photon: position, color (power), incident angle
  • 3 floats per attribute (each attribute is a vec3)
  • ~1030 x 3 x 3 = ~9270 floats passed to shader
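Passing the data down is just a matter of flattening the photons into plain float arrays on the Javascript side and uploading them as vec3 array uniforms (a sketch with illustrative names):

```javascript
// Sketch: flatten the photon list into three flat arrays, one per attribute,
// then upload each as a vec3 array uniform.
var positions = [], colors = [], incidents = [];
for (var i = 0; i < photons.length; i++) {
  positions = positions.concat(photons[i].position); // [x, y, z]
  colors = colors.concat(photons[i].color);          // [r, g, b]
  incidents = incidents.concat(photons[i].incident); // [x, y, z]
}
gl.uniform3fv(gl.getUniformLocation(program, "photonPositions"), positions);
gl.uniform3fv(gl.getUniformLocation(program, "photonColors"), colors);
gl.uniform3fv(gl.getUniformLocation(program, "photonIncidents"), incidents);
```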
I also figured out a way to modify constant values in the shader. GLSL ES requires that arrays have a constant size (i.e., known by the shader at compile time). Up until now, I have been hard-coding the array sizes (for shape count and photon count) by hand. With Russian Roulette photon scattering, a variable array size is necessary, since the total number of photons cannot be determined ahead of time. I resolved this by modifying the shader code (the actual shader program string) with Javascript before it is handed to WebGL. This works beautifully, and I now get variable-sized arrays for both photons and shapes.
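The modification itself is a simple string substitution on the shader source before it is compiled (a sketch; the placeholder token is illustrative):

```javascript
// Sketch: substitute the photon count into the shader source, then compile.
// "@PHOTON_COUNT@" is an illustrative placeholder token in the source string.
var src = fragmentSource.replace(/@PHOTON_COUNT@/g, photons.length.toString());
var shader = gl.createShader(gl.FRAGMENT_SHADER);
gl.shaderSource(shader, src);
gl.compileShader(shader);
if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
  console.log(gl.getShaderInfoLog(shader));
}
```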

Friday, November 11, 2011

Russian Roulette, RGB Version

What I did this week was implement correct photon absorption and power adjustment. This is done through a randomized method called Russian Roulette. Here's a quick summary: when a photon hits a surface that is not 100% reflective (a mirror), the photon should bounce off with less power. Instead of casting a photon with decreased power, the Russian Roulette method reflects the photon with a probability equal to the reflectivity of the surface. When integrating (averaging) photon powers to get the reflected radiance, the results come out the same. This is an optimization, since fewer photons have to be cast in total, and thus fewer have to be integrated.
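Here is a minimal sketch of the RGB version of that decision, following Jensen's formulation (illustrative names; rho is the surface's RGB diffuse reflectivity):

```javascript
// Returns the reflected photon power, or null if the photon is absorbed.
function russianRoulette(power, rho) {
  // Survival probability (Jensen): the strongest reflected channel divided
  // by the photon's own strongest channel.
  var P = Math.max(rho[0] * power[0], rho[1] * power[1], rho[2] * power[2]) /
          Math.max(power[0], power[1], power[2]);
  if (Math.random() > P) return null; // absorbed: nothing is reflected
  // Survived: divide by P so the expected reflected power stays correct.
  return [rho[0] * power[0] / P,
          rho[1] * power[1] / P,
          rho[2] * power[2] / P];
}
```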

I implemented this in my photon scattering code on the Javascript side, and visualized it with WebGL, using colored lines to represent each photon's color (its RGB power) and incident direction.


At the moment I have the basic photon scattering and tracing down. The next step is photon gathering, i.e., the radiance estimate. I am planning to implement this on the shader, per pixel; however, Norm mentioned that I should be doing this on the Javascript side as well, since the radiance estimate is also view-independent. I'm not too sure about this and will need to look into it further.

Saturday, November 5, 2011

Alpha Review Feedback

I have just gotten the comments back from our "senior review panel" at the Alpha Review. I think it mostly went well, and most reviewers are satisfied with the progress so far. However, there seem to be two main issues, which I will address here:

1. Motivation for WebGL
The question most reviewers had was: what is the point of WebGL? How is WebGL better than, say, a web plug-in? What are the benefits of putting such a renderer on the web? Are there any web-based features that could make it special?

I think one factor here is that I did not realize most people do not quite know what WebGL actually is, and thus I did not fully explain it. I will need to give it its own slide the next time I present. In addition, I'll also have to motivate it with concrete benefits such as:
  1. Being immediately accessible anywhere, since it is native to modern browsers
  2. Having direct access to the hardware through GLSL
  3. Exhibiting all the capabilities of web-based applications: since it operates in a browser through Javascript, it can easily be extended with other features

2. CPU / GPU Details
There are two things reviewers wanted to see here: 1) a CPU/GPU breakdown with a clear division, and 2) justification for putting each process on the CPU or the GPU. In general, I feel this feedback is fair, and it does somewhat reflect the state of my progress. It is precisely this CPU/GPU division that I was, and still am, figuring out. As I move along, I find out where the limitations are and what makes more sense, so the division may seem unclear now because I am still working it out myself.

But in general, I am feeling good about the feedback. Most reviewers are satisfied with the progress and feel that this project is technically challenging.

Thursday, November 3, 2011

Sidetracked: Ray Tracer

This week I got sidetracked, and I spent most of yesterday and today trying to get a decent ray tracer running on my current framework. I was successful, and was able to add refraction with a decent ray tracing depth. Everything in the ray tracing loop had to be hard-coded, since recursion is not supported in GLSL; this is why it took me so long to figure things out.
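The usual recursive formulation has to be rewritten as a fixed-depth loop that carries an accumulated attenuation factor. A rough sketch of the shape of it (illustrative names; the real intersection and shading code is much longer):

```javascript
// GLSL fragment, held as a JS string: recursion replaced by a bounded loop.
// intersectScene and shadeLocal are illustrative stand-ins for my own helpers.
var traceSrc =
  "  vec3 color = vec3(0.0);\n" +
  "  vec3 throughput = vec3(1.0);\n" +
  "  for (int depth = 0; depth < MAX_DEPTH; depth++) {\n" +
  "    if (!intersectScene(rayOrigin, rayDir, hit)) break;\n" +
  "    color += throughput * shadeLocal(hit);\n" +  // local lighting at the hit
  "    throughput *= hit.reflectivity;\n" +          // dim deeper bounces
  "    rayOrigin = hit.position;\n" +
  "    rayDir = reflect(rayDir, hit.normal);\n" +    // refraction is similar
  "  }\n";
```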

I'm also running into problems where my renderer won't run in Chrome. I think I'm hitting some kind of memory cap, because when I tried to increase the number of objects in my scene, it stopped working in Chrome - only the empty room shows up. It still works fine in Firefox. The behavior is also inconsistent: my live example works for me in Chrome, but shows no objects on several other machines.

The live example can be found at http://iamnop.com/ray/. And here is a screen shot: