I just realized I could've applied the Russian Roulette optimization to the new photon scattering method as well. The fundamental difference between the literature photon scattering method (from Per Christensen's paper) and the Georgia Tech method is the photon that is being stored. The former stores the incoming photon, while the latter stores the outgoing photon. This is why the latter allows us to compute the direct illumination as well as the indirect illumination.
There are improvements. The first is performance: I am able to cast more initial photons and end up with fewer photons in total. The results also show more subtle and pleasant diffuse reflections than before. (Notice the nice blue diffuse reflection below.)
Here's the result. 120 initial photons, 2 bounces, ~200 photons total.
Sunday, December 18, 2011
Reflection and Refraction
The performance took a huge hit. It now takes about 4 times as long to compile, and my laptop is about to burst into flames while it's running. I'm getting about 9 FPS. So the scene is a bit dark, because I'm only using 50 photons.
One other minor improvement was to add a specular highlight as an extra step after photon gathering (similar to what I did with shadows). This shows up as the white hotspot on the glass sphere.
The results look nice though. Here it is!
Saturday, December 17, 2011
Graphics is all about Tricks!
Added really cheap "fake" shadows during photon gathering. Here's what I did.
Basically, after photon gathering is done and I have the final color value, I trace a shadow ray to the light source. If it's blocked, I multiply the gathered color value by a fractional constant, making it dimmer. That's it! This could also be easily extended to soft shadows, but it might be more expensive (and that would be encroaching on the territory of Monte Carlo path tracing.)
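As a sketch, that step might look like this in the fragment shader (GLSL ES built as a Javascript string, Three.js style; the helper names and the dimming constant are made up, not my actual code):

// Hypothetical GLSL ES snippet assembled as a Javascript string.
// isBlocked() and gatherPhotons() are assumed helpers, not real code.
var fakeShadowChunk = [
  "const float SHADOW_DIM = 0.4;                  // fractional dimming constant",
  "vec3 color = gatherPhotons(hitPos, hitNormal); // radiance from the photon map",
  "vec3 toLight = normalize(lightPos - hitPos);",
  "if (isBlocked(hitPos, toLight)) {              // shadow ray hit an occluder",
  "  color *= SHADOW_DIM;                         // cheap fake shadow",
  "}"
].join("\n");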
Looks pretty convincing as far as I can tell, and this is much cheaper than the blending between full-on ray tracing and photon gathering that I had earlier.
Per Geometry Caching (Comparison)
When comparing per geometry caching with previous results, at 200 photons, I notice a few things:
- Performance is roughly the same, with ~9 FPS on my laptop.
- Per Geometry method has a much longer shader compile time, which makes sense because we now have all those if statements.
- Per Geometry method generally produces much nicer and cleaner results. Results from before look a little "dirty" because it seems as if there are paint splats everywhere.
- Per Geometry method also has much sharper edges because the colors cannot bleed over.
Per Geometry Caching (Completed)
Continuing the work from the previous post, I completed the rendering side of per geometry caching. Unfortunately, I wasn't able to come up with a better way than the one mentioned in the last post, with brute-force if statements per geometry. As a result, I didn't get the speed increase I was expecting. Hopefully I can find a better way to do this and get the performance gain we were supposed to see.
The results look better as expected, with no unintentional color bleeding from surface to surface. Here is a screenshot of it. 100 photons, no bounces (which means there should be no diffuse color bleeding anywhere.)
Saturday, December 10, 2011
Per Geometry Caching (In Progress)
Started working on implementing per geometry photon caching. This basically means storing photons at each geometry instead of at a global scene level. This will help with performance during the gathering step, since we will now only have to look at photons on the geometry, not photons in the whole scene. It will also eliminate unintended color bleeding, since only photons on the geometry will affect the color of that geometry.
I reasoned that if I create a new photon list for each geometry in the scene to store its photons, it will be quite inefficient, since I will need three of these lists and each list is actually a texture at the GLSL level. So I decided to tackle the problem by keeping my global photon list, but partitioning it so that each part corresponds to a geometry (e.g., photons 0-18 are on geometry0, photons 19-42 are on geometry1, and so on.)
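In Javascript, the partitioning could look roughly like this (a sketch with made-up names, not the actual code):

// Build one flat photon list, partitioned per geometry, and remember
// the [start, end) offsets of each geometry's slice.
var photons = [];   // flat list of {position, color, incident}
var partition = []; // partition[g] = {start, end} into photons
scene.geometries.forEach(function (geom, g) {
  var start = photons.length;
  geom.photons.forEach(function (p) { photons.push(p); });
  partition[g] = { start: start, end: photons.length };
});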
I was able to do this successfully and visualize it (since the data is passed down as a texture.)
The first row is color, second is position, and third is angles. It is clear that data is partitioned. For example, take a look at the top row representing color. The first section is all yellow; this corresponds to geometry0 which is a yellow wall, so its photon colors must be yellow. Next is the opposite blue wall, then there are 4 other gray walls/floors, and finally the three shapes in the scene, colored red, green, and purple respectively.
The next step is to do a look up in the shader. In a regular programming language, this would be trivial, but GLSL ES does not allow variable array indexing, or variable loop lengths. This is my current problem: I have to figure out how to efficiently do a photon lookup within those partitions in GLSL.
I was going to do:
// idx - index of geometry
for (int i = PHOTON_START[idx]; i < PHOTON_END[idx]; i++) {
  // gather photons, compute color
}
However, const arrays are not allowed in GLSL ES. So my next alternative (a rather nasty Javascript hack) is to have Javascript hard-code multiple IF statements:
if (idx == 0) {
  for (int i = PHOTON_START_01; i < PHOTON_END_01; i++) {
    // compute color...
  }
}
if (idx == 1) {
  for (int i = PHOTON_START_02; i < PHOTON_END_02; i++) {
    // compute color...
  }
}
...
I'm not sure if there's a smarter way to do this. This would be quite a few IF statements, one per geometry, and it would not scale well. And if the hardware visits every inner IF statement anyway (something I'd have to figure out), then this would not improve performance at all (plus it would make it worse because of all the conditionals.)
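For what it's worth, the Javascript side of the hack is just string building: generate one hard-coded branch per geometry and splice it into the shader source before compiling. Roughly (a sketch; "partition" and the placeholder name are assumptions):

// Generate one hard-coded branch per geometry, then splice the result
// into the fragment shader source at an assumed placeholder.
var branches = partition.map(function (p, g) {
  return "if (idx == " + g + ") {\n" +
         "  for (int i = " + p.start + "; i < " + p.end + "; i++) {\n" +
         "    // compute color...\n" +
         "  }\n" +
         "}";
}).join("\n");
var fragmentSource = shaderTemplate.replace("/*GATHER*/", branches);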
Friday, December 9, 2011
Refraction...
Got refraction working in real time. Doesn't look quite realistic without surface reflections though...
Also, apparently, this runs on Chrome now. No idea why it works.
(80 photons, no bounces)
Gaussian Looks Nice!
Since this week was the last week of school, I wasn't able to do that much, but the one thing I accomplished was to implement Gaussian filtering.
Gaussian filtering is used for photon gathering. I needed to tweak a few operations and constants here and there before I could get the image to look nice. It's definitely much smoother than the cone filtering I had before.
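For reference, here's a sketch of the standard Gaussian filter weight from Jensen's photon mapping book (constants 0.918 and 1.953), written as GLSL ES in a Javascript string; my actual constants are tweaked, as mentioned:

// Jensen-style Gaussian weight for a photon at distance d from the gather
// point, with search radius r (GLSL ES, embedded as a Javascript string).
var gaussianChunk = [
  "const float ALPHA = 0.918;",
  "const float BETA  = 1.953;",
  "float gaussianWeight(float d, float r) {",
  "  float num = 1.0 - exp(-BETA * d * d / (2.0 * r * r));",
  "  float den = 1.0 - exp(-BETA);",
  "  return ALPHA * (1.0 - num / den);",
  "}"
].join("\n");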
Here's the image! (100 photons, no bounces)
Friday, December 2, 2011
Beta Review and Future Plan
Today I had my Beta Review with Joe and Norm, and I'd say it went pretty well. Norm was generally satisfied with the results, mainly because I am able to produce a decent image in real time with relatively few photons, despite the various corners that have been cut and the many "cheats" I did on my side of things.
There were a few good points that came out of the conversation:
- Gaussian filtering might produce better results when doing photon gathering.
- Caching photons per surface will produce sharper corners, eliminate unintended bleeding, and speed up photon search.
- I should look into an acceleration structure now that I have random access using textures.
- I should talk to Joe about how to optimize branches on GLSL.
- I should look into Arvo's paper on caching rays in 5D space (???)
Must:
- Gaussian fall-off
- Complete global illumination with mirror and transparent materials
- Per surface photon caching (at least investigate)
- Some kind of acceleration structure (at least investigate)
- Interactive user interface, allows user to modify scene
Dec 4 - 10
- Gaussian fall-off for photon gathering
- Photon caching per surface
- Acceleration structure
- Reflection and refraction
- User interactivity
- Final user interface work
- Movie presentation
Self Evaluation
We're at the point of Beta Review, and it's time to do a self evaluation of how the project has been coming along.
Looking back at what I set as the original goals, they were quite reasonable. After all, my project is only to implement global illumination with photon mapping - pretty standard stuff. However, there are still many features that I left out and did not implement. Why? I think the one aspect of this project that was particularly lacking was the initial research and planning around the framework I would be using. Most of the time spent on the project was not on the global illumination algorithm, but on debugging and making compromises for the nature of the framework. More specifically, there were many limitations of GLSL ES that I did not foresee, which made many standard features of a scene graph or photon mapping very tricky and difficult to implement. With more thorough research and planning through every step of the project, I believe the project would have gone much more smoothly. Nonetheless, I managed to make certain compromises and find various alternative methods along the way that allowed me to build some kind of global illumination renderer on WebGL.
The following is a list of features I have today:
- Scene list
- Ray tracing, with reflections and refractions
- Photon scattering
- Rendering with photon gathering (with only full reflection or refraction)
- Real-time interactive user-controlled camera
And here are the features that were left out or compromised:
- Scene graph - instead using a simple scene list
- Photon scattering and rendering - not using standard literature technique
- Interactive user interface - allow users to modify the scene
- Irradiance Caching
- Caustics
After the Beta Review tomorrow, I will post more details about the future timeline of the next few weeks.
Thursday, December 1, 2011
Lookin' Not Bad!
Improvements:
- Figured out I can dynamically load textures using a built-in class from Three.js (could've saved me >6 hours if I had figured this out earlier.)
- Loaded all photon values (position, color, and incident angles) through textures for lookup in the shader. Turns out precision loss isn't a big problem.
- Played with photon gathering equation. Increased radius, and dimmed the light contribution.
And here's an experiment on mirrored surface. 100 photons only (performance dies at 200).
Sunday, November 27, 2011
Attempt to Optimize
Today I was trying to answer two questions:
- Is the limit real? Is there a limit on the number of operations or loops? Which one is breaking the code?
- Can I optimize the photon gathering code so that more photons can be used?
So I spent the rest of the day trying to reduce the number of array look-ups. My solution was, instead of using vector arrays, to use a texture and sample it. This was a very long process that took me the whole day. The first step was to figure out how to go from an array of floats to an Image in Javascript. It turns out there isn't any image library for doing this (i.e., something like SOIL for C++), but I finally found that I could use the HTML canvas element. I can individually set pixel values of the canvas, and then export the canvas as an image data URL. Once I had the data URL, I tried passing it to WebGL as a texture, but (after several hours of researching) it turns out a WebGL texture cannot be created from a data URL. The data URL must first be saved as a real image before it can be loaded as a GL texture. I had no better idea (and I had already spent too much time on this to give up), so I decided to save the texture out as a real file on the server using PHP, then load that file as a GL texture. After some time, I got it to work, and could correctly load a texture that was dynamically created from an array of floats.
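The canvas trick, roughly (a sketch with assumed names; one photon per pixel, three floats in [0, 1] mapped to the RGB channels):

// Pack photon colors into a 1-pixel-tall canvas, then export a data URL.
var canvas = document.createElement("canvas");
canvas.width = photons.length;
canvas.height = 1;
var ctx = canvas.getContext("2d");
var img = ctx.createImageData(canvas.width, 1);
photons.forEach(function (p, i) {
  img.data[4 * i + 0] = Math.round(p.color.r * 255); // quantize to 8 bits
  img.data[4 * i + 1] = Math.round(p.color.g * 255);
  img.data[4 * i + 2] = Math.round(p.color.b * 255);
  img.data[4 * i + 3] = 255;                         // opaque alpha
});
ctx.putImageData(img, 0, 0);
var dataURL = canvas.toDataURL("image/png"); // then saved via PHP, reloaded as a GL texture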
The data I decided to convert to textures were the color values and the incident angles, because they are both between 0 and 1. The color values transferred nicely (as expected), but I had major issues with the incident angles because their values can be negative. I first tried to remap the values, such that 0-0.5 maps to negative and 0.5-1.0 maps to positive. However, this led to much loss of data, and the results did not come out correctly. Below is an image depicting this loss of data:
Next I decided to separate the magnitude and the sign into separate textures. I succeeded in passing these two textures to the shader, but I was not able to correctly convert them back or use them. I suspect that interpolation by the texture sampler is messing it up; ideally I want discrete values, but GLSL interpolates the texture's RGB values when sampling, so that could be the source of the error.
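For reference, the simple remap from the earlier attempt would look something like this (a sketch; the 8-bit quantization in the texture channel is exactly where the data gets lost):

// Javascript side: pack a signed component in [-1, 1] into [0, 1].
function encodeSigned(x) { return (x + 1.0) / 2.0; }
// Shader side (GLSL ES, as a string): decode back to [-1, 1].
var decodeChunk = "float decodeSigned(float t) { return t * 2.0 - 1.0; }";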
So finally, I decided to leave the incident angles out of the equation for now. With the improvements I made today, I can now have a total of 140 photons in the scene, almost triple the amount I had earlier. This is a good improvement; however, it is still nowhere near enough to produce a good image.
Next I am planning to convert the photon positions into a texture as well, but this will require a much more involved conversion. Hopefully it will allow me to loop through even more photons. However, I have read that texture samples are really slow, so I am not sure about the effect on performance. Right now I am only doing one texture sample for the color per pixel, but if I were to use a texture for positions, it would be as many texture samples as there are photons.
Below is the current result, with the mentioned improvements and using the alternative photon scattering technique mentioned in the previous post.
Friday, November 25, 2011
Different Photon Casting
Experimenting with a different photon tracing technique, proposed by http://www.cc.gatech.edu/~phlosoft/photon/.
This technique is for fast visualization, using the photon map for both direct and indirect illumination. The photon power calculation is a little different than the conventional method: instead of directly storing the power of the incoming photon, it "absorbs" some RGB values and then stores the result. So a blue wall would have blue photons stored on it, as opposed to white as usual. This allows for diffuse reflection (as usual) as well as direct illumination. (There is no Russian Roulette involved with this method.)
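A sketch of that power update (made-up names): the stored photon carries the outgoing power, tinted by the surface it just hit.

// The "absorb then store" step: a blue wall stores blue photons.
function storeOutgoing(photon, surface, hit) {
  photon.power.r *= surface.color.r; // the surface absorbs the rest
  photon.power.g *= surface.color.g;
  photon.power.b *= surface.color.b;
  photonMap.push({ position: hit.point, power: photon.power, dir: photon.dir });
}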
Here's just the photon map visualization with 1500 photons:
Here's a rough rendered result with 60 total photons:
The funny/cool thing is, you can kinda see the scene if you blur the photon viz above:
Encountering a Rather Big Problem...
I implemented photon gathering in the shader. A ray is first traced into the scene. Once it hits a surface, I look for photons within a certain radius (spherical for now) and compute the radiance. This is computed by adding up the power (RGB channels) of all the photons, weighting each by the dot product of the surface normal and the incident direction, and finally multiplying by a linear drop-off filter (I believe this is cone filtering.)
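As a rough sketch, the gather loop looks something like this (GLSL ES as a Javascript string; the names and exact weighting are approximations of what I described above):

// Hypothetical GLSL ES gather loop, assembled as a Javascript string.
var gatherChunk = [
  "vec3 radiance = vec3(0.0);",
  "for (int i = 0; i < NUM_PHOTONS; i++) {",
  "  float dist = length(photonPos[i] - hitPos);",
  "  if (dist < RADIUS) {",
  "    float w = max(dot(hitNormal, -photonDir[i]), 0.0); // incident-angle term",
  "    float cone = 1.0 - dist / RADIUS;                  // linear drop-off",
  "    radiance += photonPower[i] * w * cone;",
  "  }",
  "}"
].join("\n");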
Here is an initial result layered on top of ray tracing. 20 photons are initially cast, with a final 49 photons in the global photon map.
The problem now is that I cannot increase the number of initial photons past 20 (on my laptop) or 35 (on a machine with a decent graphics card); the shader simply fails to initialize. I believe I am hitting some sort of temp variable limit, operation limit, or loop size limit. This is actually quite discouraging, since I need to cast at least 100 initial photons (~220 final photons) to make a decent image. I will further investigate the cause of the shader failure, and whether there is a way around it.
Otherwise, if such a barrier exists, I have a few backup ideas, but they would require major changes to my framework:
- Instead of having one big global photon map that will take time to loop through, create "local" photon maps, one per object in the scene, and only loop through that local map to look up photons when computing radiance.
- Look at a kd-tree as an acceleration structure. It would help reduce look-up time, but I doubt it would get me from 20 to 100. Also, I doubt this would even be possible, since GLSL ES does not allow random access to arrays.
- Discard the single viewing plane framework and render onto the actual textures of WebGL geometries. This is quite "crazy" and I'm not sure whether it will work; it will also require huge changes to my code - basically a rewrite of most of my current framework.
Thursday, November 17, 2011
Ready for Radiance Gathering!
Success! I was able to pass large amounts of photon data down to the shader.
This was the part I had been worrying about for a while. When I initially had photon casting and rendered the photons on the shader, the performance was terrible (10-15 fps). I only had about 50-100 photons, which is nowhere near enough for a good render, and I didn't even have incident angle data - I was only passing down positions and color (power). This worried me. This week, with a completed photon scattering step, I started passing down the complete photon data (position, color, and incident angles) in much larger numbers, but I did not render them out. When I first ran it, I was expecting my laptop to burst into flames. However, it was completely okay. So I proved that the shader can handle large amounts of floating-point data without failing. This is a huge relief, and I am now ready to move on to radiance gathering on the shader! This is exciting!
The numbers I was working with were:
- 500 photons initially cast from light source
- ~1030 photons total after photon scattering
- 3 attributes per photon: position, color (power), incident angle
- 3 floats per attribute (each type of vec3)
- ~1030 x 3 x 3 = ~9270 floats passed to shader
Friday, November 11, 2011
Russian Roulette, RGB Version
What I did this week was correct photon absorption and power adjustment. This is done through a randomized method called Russian Roulette. Here's a quick summary: when a photon hits a surface, assuming the surface is not 100% reflective (a mirror), the photon will bounce off with less power. Instead of casting a photon with decreased power, the Russian Roulette method casts the photon with a probability equal to the reflectivity of the surface. When integrating (averaging) photon powers to get the reflected radiance, the results are the same. This is an optimization, since fewer photons have to be cast in total, and thus fewer have to be integrated.
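Here's a sketch of the RGB version in Javascript (made-up names; the photon survives with probability equal to the max RGB reflectivity, and a survivor's power is rescaled so the expected value stays correct):

// RGB Russian Roulette at a bounce: absorb or continue with rescaled power.
function russianRoulette(photon, surface) {
  var refl = surface.reflectivity;           // {r, g, b}, each in [0, 1]
  var pSurvive = Math.max(refl.r, refl.g, refl.b);
  if (Math.random() > pSurvive) return null; // photon absorbed
  photon.power.r *= refl.r / pSurvive;       // rescale so the average works out
  photon.power.g *= refl.g / pSurvive;
  photon.power.b *= refl.b / pSurvive;
  return photon;                             // continues with a new bounce
}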
I implemented this in my photon scattering code on the Javascript side, and visualized it using WebGL, with colored lines representing the photon color (respective RGB power) and the incident direction.
At the moment I have the basic photon scattering and tracing down. The next step would be to do photon gathering or the radiance estimate. I am planning to implement this on the shader, per pixel; however, Norm mentioned that I should be doing this on the Javascript side as well since radiance estimate is also view-independent. I'm not too sure about this and will need to look into it further.
Saturday, November 5, 2011
Alpha Review Feedback
I have just gotten the comments back from our "senior review panel" at the Alpha Review. I think it mostly went well, and most reviewers are satisfied with the progress so far. However, there seem to be two main issues, which I will address here:
1. Motivation for WebGL
The question that most reviewers had was what is the point of WebGL. How is WebGL better than, say, a web plug-in? What are the benefits of putting such a renderer on the web? Are there any web-based features that could make it special?
I think one of the factors here is that I did not realize most people do not quite know what WebGL actually is, and thus I did not explain it fully. I would need to give it its own slide next time I do a presentation. In addition, I'll also have to motivate it with concrete benefits such as:
- Being immediately accessible anywhere, since it is native to modern browsers
- Having direct access to the hardware through GLSL
- Exhibiting all capabilities of web-based applications, operating in a browser through Javascript, such that it could be very easily extended with other features
2. CPU / GPU Details
There are two things here that reviewers wanted to see: 1) a CPU/GPU breakdown with a clear division, and 2) justification for putting processes on the CPU/GPU. In general, I feel this feedback is definitely fair, and it does somewhat reflect the state of my progress. It is this CPU/GPU division that I was, and still am, figuring out. As I move along, I find out where the limitations are and what makes more sense, so it might seem unclear now because I am still trying to figure it out for myself.
But in general, I am feeling good about the feedback. Most reviewers are satisfied with the progress and feel that this project is technically challenging.
Thursday, November 3, 2011
Sidetracked: Ray Tracer
This week I sidetracked, and I spent most of yesterday and today trying to get a decent ray tracer running on my current framework. I was successful and was able to add refraction with a decent ray tracing depth. Everything in the ray tracing loop had to be hard-coded since recursion is not supported; this is why it took me so long to figure things out.
I'm also running into problems where my renderer won't run in Chrome. I think I'm hitting some kind of memory cap, because when I tried to increase the number of objects in my scene, it stopped working in Chrome; only the empty room with nothing in it shows up. It still works fine in Firefox. This has also been happening inconsistently: my live example works for me in Chrome, but shows no objects on several other machines.
The live example can be found at http://iamnop.com/ray/. And here is a screen shot:
Monday, October 31, 2011
Initial Photon Scattering
Finally, after many battles, I have simple photon scattering.
First, I ported my existing intersection code from the shader to Javascript. I realize that this is code duplication, but I feel that it is necessary, since I believe photon mapping should be done on the CPU side. (Also, this is a question that should be addressed later.) Next I moved the scene list over to Javascript, which is then passed down to the shader. Here, I didn't handle shape types -- for shapes, I have lists of positions, radii, and colors, and I didn't want to add another list for types. So at the moment photon scattering only works with spheres. To support cubes, I'm planning to pass a different list down to the shader for each shape type. Probably not a good idea for extensibility, but it is the most efficient in terms of memory.
Then I performed photon scattering by shooting rays, storing their position and color as they hit objects, and recursing on the reflected rays. At this point, I don't have the correct power/color computation. I'm only directly storing color as they are absorbed, just to see if I can do it correctly.
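The scatter step itself is a short recursion. Roughly (a sketch with assumed names; spheres only, as mentioned):

// Shoot a ray, store a photon at each hit, recurse on the reflected ray.
function scatter(ray, depth) {
  if (depth > MAX_BOUNCES) return;
  var hit = intersectSpheres(ray);       // intersection code ported from the shader
  if (!hit) return;
  photonMap.push({ position: hit.point, color: hit.sphere.color });
  scatter(reflect(ray, hit), depth + 1); // recurse on the reflected ray
}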
Next step would be to do correct photon scattering, which involves correct power adjustment and Russian Roulette method for absorption.
Here is a screen shot of photons visualized over a ray traced scene. The color showing is the color of the previous reflected surface, and each photon can be traced correctly.
Monday, October 24, 2011
More Frustration...
Three.js has many useful built-in classes. It even has a Ray class and a Scene class complete with objects, meshes, and intersection code.
Today I was trying to use the built-in intersection functionality for photon scattering. Turns out, I've spent 5+ hours (and counting) trying to figure it out. It should be very simple, but for some reason I am not getting any intersections.
I've been posting questions on github asking for help. Here's what's going on:
https://github.com/mrdoob/three.js/issues/680
Sunday, October 23, 2011
Passing Arrays to Shader
Finally, I have decided to do photon scattering on the CPU side.
Therefore I needed to figure out how to pass an array of photons to the shader after they have been scattered. Today I spent 4 hours trying to figure out how to pass arrays to the shader.
Since (I believed) you cannot pass arrays as uniforms into the shader itself, I would need to construct a texture out of the photon data and sample it for values. I wasn't sure I could construct a texture and pass it to the shader in Three.js, so I rebuilt my existing Three.js renderer in pure WebGL, since I know I can construct a texture in OpenGL.
After 4 hours, I figured things out and completed a version of my renderer in pure WebGL. However, at that point, I also found out that you can pass arrays to the shader in Three.js very easily, just like passing any other type of data.
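If I recall the Three.js API correctly, it's just a typed uniform entry, something like this (a sketch; the "v3v" vec3-array type string is my recollection and may be wrong):

// Declare array uniforms on a ShaderMaterial (2011-era Three.js style).
var uniforms = {
  photonPositions: { type: "v3v", value: photonPositionArray }, // array of THREE.Vector3
  photonColors:    { type: "v3v", value: photonColorArray }
};
var material = new THREE.ShaderMaterial({
  uniforms: uniforms,
  vertexShader: vertexSource,
  fragmentShader: fragmentSource
});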
So it looks like I just wasted 4 hours that I could've spent on the actual photon scattering.
Thursday, October 20, 2011
Where to Build Photon Map?
As I sat down to carefully read about photon mapping and think it through, one big question popped up that is stopping me from moving on:
Where should I be building this photon map?
The photon map is view-independent, and is static unless something in the scene changes. Thus, it makes sense to build it on the CPU side (Javascript), then pass it down to the shader to do per-pixel ray tracing. However, I have read a few articles (including John's CIS565 lecture) that mention constructing the photon map on the shader, and how it can give better performance.
Right now, CPU side construction makes more logical sense to me. However, if I choose to do this, I would have to re-implement all the intersection code for shapes, and move the scene from shader back to Javascript, which will be a bit more work. I also will need to look into how I can pass non-primitive data structures down to the shader.
Anyone who knows anything about this, please advise!
Tuesday, October 18, 2011
I haz a Cornell Box!
Ray traced. Reflection depth 2. Results look correct. Should be ready for the next step.
The code is live at http://iamnop.co.cc/webgl