Sunday, June 8, 2014

WebGL GPU Particles

A weekend project I started about a month ago: a WebGL GPU particle simulator running 1 million particles at 60 fps!



Live code: http://iamnop.com/particles-mrt/
Click to set the gravity point. Hold Alt for Maya camera controls.
(Requires the webgl_draw_buffers extension; see the note at the bottom of this post on how to enable it.)

Video: https://www.youtube.com/watch?v=IyM0YxizdnY

GitHub: https://github.com/nopjia/particles-mrt

Technical Details
This demo can be broken into two parts: 1) the compute and 2) the view.

The compute part is where most of the work happens to simulate the particles. There are three textures, one for each of the particles' states: position, velocity, and color. Each particle is then represented by a UV coordinate into these textures, essentially a pixel, where its state is stored. One thing to note is that the position and velocity textures have to be float textures in order to represent the required range of numbers. At first I tried scaling up the range of a regular byte texture, but there just wasn't enough resolution for the simulation.
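For reference, a float state texture like this might be created along the following lines (a minimal sketch, not the demo's actual code; in WebGL 1 the OES_texture_float extension is needed to upload FLOAT pixel data):

    gl.getExtension("OES_texture_float");   // allow FLOAT pixel data in WebGL 1

    function createStateTexture(gl, size, data) {   // data: Float32Array or null
      var tex = gl.createTexture();
      gl.bindTexture(gl.TEXTURE_2D, tex);
      // One RGBA float texel per particle, e.g. xyz = position, w unused.
      gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, size, size, 0,
                    gl.RGBA, gl.FLOAT, data);
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
      return tex;
    }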

The heavy lifting is done through a GLSL shader which I use as a compute shader. It reads in the three state textures (nearest filtering of course, wouldn't make sense otherwise) and performs a simulation step. Then it writes the results back to the corresponding framebuffers, all at once thanks to the webgl_draw_buffers extension. Because OpenGL doesn't allow writing to the same textures that you are reading from, I needed a duplicate set of framebuffers/textures to swap between every frame.
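In code, the MRT setup looks roughly like this (a sketch with made-up names, not the repo's exact source): the three state textures are attached to one framebuffer, and drawBuffersWEBGL maps them to gl_FragData[0..2] in the compute shader.

    var ext = gl.getExtension("WEBGL_draw_buffers");

    function createComputeTarget(gl, posTex, velTex, colTex) {
      var fbo = gl.createFramebuffer();
      gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
      gl.framebufferTexture2D(gl.FRAMEBUFFER, ext.COLOR_ATTACHMENT0_WEBGL,
                              gl.TEXTURE_2D, posTex, 0);
      gl.framebufferTexture2D(gl.FRAMEBUFFER, ext.COLOR_ATTACHMENT1_WEBGL,
                              gl.TEXTURE_2D, velTex, 0);
      gl.framebufferTexture2D(gl.FRAMEBUFFER, ext.COLOR_ATTACHMENT2_WEBGL,
                              gl.TEXTURE_2D, colTex, 0);
      ext.drawBuffersWEBGL([
        ext.COLOR_ATTACHMENT0_WEBGL,   // gl_FragData[0] -> new position
        ext.COLOR_ATTACHMENT1_WEBGL,   // gl_FragData[1] -> new velocity
        ext.COLOR_ATTACHMENT2_WEBGL    // gl_FragData[2] -> new color
      ]);
      return fbo;
    }

    // Two such targets are kept; each step reads the textures of one and
    // renders into the other, then the roles are swapped.
    var tmp = readTarget; readTarget = writeTarget; writeTarget = tmp;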

The view part is responsible for the actual drawing of the particles onto the screen. The number of particles is determined by the size of the compute textures (e.g., 1024x1024 textures give us ~1 million particles!). As mentioned earlier, each particle is represented only by a pixel location in the state textures, so all I had to do for each vertex was submit a UV coordinate. Then in the vertex shader, I simply use the UV to look up the particle's position from the position texture and its color from the color texture. (The velocity texture is only used by the compute shader to run the simulation and isn't needed here.)
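An illustrative vertex shader for this lookup might look like the following (a sketch with made-up attribute/uniform names, not the repo's exact source):

    var particleVS = `
      attribute vec2 aUV;          // the particle's texel coordinate
      uniform sampler2D uPosTex;   // position state texture
      uniform sampler2D uColTex;   // color state texture
      uniform mat4 uProjView;
      varying vec4 vColor;
      void main() {
        vec3 pos = texture2D(uPosTex, aUV).xyz;  // fetch state by UV
        vColor   = texture2D(uColTex, aUV);
        gl_Position  = uProjView * vec4(pos, 1.0);
        gl_PointSize = 1.0;                      // drawn as gl.POINTS
      }
    `;

The only vertex buffer needed is then a static list of UVs, one per particle, drawn as gl.POINTS.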

In order to ensure smoothness, the simulation runs on a different loop than the rendering. The simulation step has to run at a fixed time step, independent of the frame rate, otherwise a variable time step would introduce irregularities in the simulation, which becomes quite apparent when there are a million particles. Putting the simulation on a fixed update step not only produces smooth results but also allows for consistent results no matter the frame rate.
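One common way to decouple the two is a fixed-step accumulator loop; the demo may schedule the simulation differently (e.g. on its own timer), but the idea is the same. A sketch, assuming update() runs the compute pass and render() the view pass:

    var DT = 1 / 60;                      // fixed simulation step, in seconds
    var accumulator = 0;
    var lastTime = performance.now();

    function frame(now) {
      accumulator += (now - lastTime) / 1000;
      lastTime = now;
      if (accumulator > 0.25) accumulator = 0.25;  // avoid a huge catch-up after a stall
      while (accumulator >= DT) {
        update(DT);                       // compute pass: always a constant step
        accumulator -= DT;
      }
      render();                           // view pass: draw the latest state
      requestAnimationFrame(frame);
    }
    requestAnimationFrame(frame);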

WIP!
This is a work in progress, and I'm planning to add more features. Most important is a fallback for when the webgl_draw_buffers extension isn't available, so that any up-to-date browser can run this. Then I'm planning to add a few post-processing effects like ghost trails and bloom to make it look even more awesome. Finally, I'll need a better UI to expose more settings to play with!

Note on Live Demo!
It is not guaranteed to work on all machines! The demo is using a draft extension which might not be supported on your hardware. Here's how to try to get it to work:
  1. Get the latest version of Chrome
  2. Go to chrome://flags/
  3. Enable "WebGL Draft Extensions" (chrome://flags/#enable-webgl-draft-extensions)
  4. For Windows, you might also need to enable D3D11 (chrome://flags/#enable-d3d11)
If it still doesn't work, try Firefox. I'm sorry, but I promise I will add support for running without the extension soon!

Thursday, July 25, 2013

CUDA Path Tracer

Interactive real-time path tracer in CUDA.
Advanced Rendering Seminar
University of Pennsylvania, Spring 2013.
 
My last school project, ever! (hopefully)
 

Github: https://github.com/nopjia/tracer
Images: Dropbox Link
Video: http://www.youtube.com/watch?v=mbpqxlJHaBE

This was my first ever attempt at writing a path tracer, and of course, making things real-time (or at least somewhat) is always more fun. It was something I'd always wanted to do. I learned a whole lot about physically based rendering, for obvious reasons. But I also learned a lot about coding in CUDA and its ins and outs, which involved writing a lot of pure C code and nifty inline structs and functions that work on both host and device.

Although the renderer doesn't have fancy features, I'm quite happy with it. Despite the lack of acceleration structures, it still runs at ~20 fps and converges in around 30 seconds on my laptop. The next area of improvement would definitely be adding a kd-tree in order to support larger scenes.

Finally, as usual, I decided to make things interesting by making it interactive. Being able to easily modify the scene on the fly really makes it more fun and creates lots of interesting images.

Head over to the Github page for a whole lot more info!

Thursday, May 16, 2013

WebTube

Here is a long overdue post that I finally got around to writing now that school is over! Over the weekend of January 18-20th, I attended the Spring 2013 PennApps Hackathon, the largest student-run hackathon on the East Coast, organized by none other than the University of Pennsylvania. Since this was to be my last PennApps (my 5th time), I decided to just enjoy the weekend, hang out with people, and build something fun.

I made WebTube, a fun little web app that lets you browse the internet through the comfort of an old-school CRT TV.

The reception turned out much better than I had expected. I came into the hackathon just to have some fun, not expecting anything. In the end, WebTube won 3rd place overall and the Audience Choice Award!

Demo video: http://www.youtube.com/watch?v=8ZUCyN6yvps
Live code: http://iamnop.com/webtube/ (must enable CSS Shaders flag in Chrome)


Technical Details

The core component of this web app is CSS Shaders, which provide access to programmable vertex and fragment shaders for HTML DOM elements. CSS Shaders are an extension of CSS Filters, the set of image filters available in CSS, such as grayscale, sepia, brightness/contrast, hue/saturation, etc. CSS Shaders are essentially custom filters whose behavior is controlled through vertex and fragment shaders. This allows for much greater freedom and very interesting CSS animations and transitions. (See CSS FilterLab for live examples.) See the links below for more information on CSS Shaders.

With an understanding of CSS Shaders, WebTube is a very simple app. It has one main DOM element for the screen, which supports different modes, including web browsing and an SSH terminal. Multiple CSS transition rules are applied to the screen using CSS Shaders: the vertex shader warps the screen to produce the curvature, and the fragment shader applies TV-like effects on top.

Links

http://www.adobe.com/devnet/html5/articles/css-shaders.html
http://html.adobe.com/webplatform/graphics/customfilters/cssfilterlab/
http://alteredqualia.com/css-shaders/article/
https://dvcs.w3.org/hg/FXTF/raw-file/tip/filters/index.html#shading-language

Thursday, September 20, 2012

WebGL Volumetric Renderer


 

Live code: iamnop.com/volumetric
(Does not run on Windows; unknown bug with shader compilation.)

Video: http://www.youtube.com/watch?v=VPhnwOpmUqY

GitHub: https://github.com/nopjia/WebGL-Volumetric

As a small side project, I implemented volumetric ray casting in WebGL. The idea is to ray cast a 3D texture, which should be very fast since OpenGL texture lookups are highly optimized; however, this isn't directly possible because 3D textures are not supported in WebGL.

During SIGGRAPH 2012, I met Luis Kabongo from VICOMTech, who showed me their implementation of a WebGL volume renderer, which they use for medical imaging. They made it possible by using, instead of a 3D texture, a 2D texture atlas made up of the volume's 2D slices. This is a very simple solution, and I immediately took the idea and implemented my own version.
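The core of the trick is converting a 3D texture coordinate into a 2D lookup in the atlas. A rough sketch of that helper in GLSL (the uniform names are made up; the atlas is assumed to hold uSlices Z-slices laid out in a uCols x uRows grid):

    var sampleVolumeGLSL = `
      uniform sampler2D uAtlas;
      uniform float uSlices, uCols, uRows;

      vec4 sampleVolume(vec3 p) {          // p in [0,1]^3
        float slice = floor(p.z * (uSlices - 1.0));
        vec2 tile = vec2(mod(slice, uCols), floor(slice / uCols));
        vec2 uv = (tile + p.xy) / vec2(uCols, uRows);
        return texture2D(uAtlas, uv);
      }
    `;

Flooring the slice index snaps to the nearest slice; sampling two adjacent slices and mixing between them gives smoother results along Z.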

VICOMTech's volume renderer is meant simply for viewing 3D datasets, so it uses alpha compositing with no lighting.

I took it a step further in my own renderer and implemented physically based volume rendering. The lighting model is based on light transmittance: multiple lights can be placed in the scene, and the transmittance toward each light is computed with an exponential fall-off.
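The transmittance computation boils down to marching from a sample point toward each light, accumulating density, and attenuating exponentially (Beer-Lambert style). A rough sketch, assuming a sampleVolume() helper like the one above and made-up NUM_STEPS / ABSORPTION constants:

    var transmittanceGLSL = `
      float transmittance(vec3 p, vec3 lightPos) {
        vec3 stepVec = (lightPos - p) / float(NUM_STEPS);
        float density = 0.0;
        for (int i = 0; i < NUM_STEPS; i++) {
          p += stepVec;
          density += sampleVolume(p).a;    // accumulate density toward the light
        }
        return exp(-ABSORPTION * density * length(stepVec));
      }
    `;

Attenuating each light's contribution by this factor is what produces the volumetric shadows mentioned below.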

As a result, the renderer features physically-based lighting, volumetric shadows, and support for multiple lights. The results turned out very nicely.

Tuesday, September 18, 2012

CodeDJ Live!

Live code here.
(Only works in Firefox, since it uses the Audio Data API.)
The app is not the most intuitive and has a small learning curve.
Please see the sections below for more info.

Video: http://www.youtube.com/watch?v=bl11T112Jn0

 


A project I did for the PennApps 48-hour hackathon.

About
CodeDJ Live! is a web-based app for programmers to code up visuals in real-time as the music plays. With this app the programmer becomes a Visualizer DJ who produces stunning visuals at a party.

The app is inspired by Iñigo Quilez and his live-coding demos at the SIGGRAPH 2012 Real-Time Live event. Similar examples can be seen here and here.

Technical Overview
This app is essentially a WebGL GLSL live coder hooked up to an audio spectrum analyzer. The user has access to the GLSL code that produces the visuals, which can be recompiled instantly and displayed.

Technical Details

Music Analyzer
Audio frequency analysis is done using Mozilla's Audio Data API and the fast Fourier transform functions provided by [blank]'s signal processing library. The Audio Data API provides access to the framebuffer containing the decoded audio sample data. This data is fed into the fast Fourier transform to extract the frequency spectrum. The spectrum has 1024 channels, which is far too many for this app, so it needs to be reduced to a manageable and meaningful number. The upper half of the frequencies is barely noticeable, so it is ignored. The lower half is kept and averaged down to 8 channels for the user to work with.
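A sketch of that reduction (the function and variable names are illustrative): keep the lower 512 of the 1024 FFT bins and average them into 8 levels.

    function reduceSpectrum(fft) {        // fft: array of 1024 magnitudes
      var levels = new Float32Array(8);
      var binsPerLevel = 512 / 8;         // 64 low-frequency bins per channel
      for (var i = 0; i < 8; i++) {
        var sum = 0;
        for (var j = 0; j < binsPerLevel; j++) {
          sum += fft[i * binsPerLevel + j];
        }
        levels[i] = sum / binsPerLevel;
      }
      return levels;
    }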

Music Access
The Audio Data API requires access to the audio framebuffer data, which is sensitive, secured data. A limitation here is the same-origin policy, which means the API cannot access audio hosted outside the app's own root website. This becomes a problem since users need to be able to use their own music with the app. My solution is to host the web app in a public directory on my personal Dropbox and have users provide their music through their own Dropbox, via the Dropbox JavaScript API. This works because the web app and any user-provided files then all live under the same root website, dropbox.com.

Music Streaming
The user logs in through the Dropbox API and specifies which folder contains the music files. Only .ogg and .wav files are accepted, as those are the formats Firefox supports. Once the user has selected a folder, no music files are downloaded or transferred. Instead, public URLs for the files are generated and stored within the app for that session; these URLs expire within 4 hours. Each audio file is then streamed on demand using its URL, and the app never loads more than one audio file at a time.

Visuals Shader
Visuals are generated using a fullscreen quad and a fragment shader. Audio frequency levels are passed into this shader as uniforms. The GLSL code generating the visuals is exposed in a large text editor, where it can be edited, recompiled in real time, and displayed on the screen.
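Feeding the levels to that shader each frame is then a single uniform update, roughly like this (the uniform name is made up; the shader would declare "uniform float uAudio[8];"):

    var uAudioLoc = gl.getUniformLocation(program, "uAudio");

    function updateAudioUniform(levels) {   // levels: the 8 averaged channels
      gl.useProgram(program);
      gl.uniform1fv(uAudioLoc, levels);
    }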

Postprocessing Shader
Finally, the visuals go through a post-processing shader which applies a series of custom effects to produce a dance-club-like style. These include over-saturation, bloom, box blur, and film grain.

Tuesday, April 24, 2012

Some Nice Images

Just throwing up some nice rendered screenshots. 
Click to see full-size (500x400).