Sunday, January 30, 2011

Cel shading, toon shading, silhouette extraction... how to make the scene look better

So far, this has been about porting ReadBlend (a Bullet physics world populated from a .blend file) from OpenGL ES 1.1 (fixed pipeline) to OpenGL ES 2.0 (GLSL), using the example toon shader (blue-white-black color palette), and making the camera slerp, lerp, and follow the ball a few metres behind it as it accelerates down the hill.
I'd like to learn how to make it look better. 

The first approach is to sample a texture in the toon shader.

The left side shows the original, with a white Phong highlight. On the right-hand side... I think the silhouette looks nice, but the texture is wrong for a cartoon look, and the Phong highlight no longer reads as Phong. A patch implementing the described change to Toon.frag is here; it might help. It's just a few lines of code copied from Texture.frag.
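The change can be sketched roughly like this. This is a hypothetical reconstruction, not the actual patch: the uniform and varying names (`sTexture`, `vTexCoord`, `vDiffuse`) and the band thresholds are assumptions, but the idea is the same, i.e. sample the diffuse texture as Texture.frag does and modulate it with the quantized toon lighting instead of the fixed blue-white-black palette.

```glsl
// Hypothetical Toon.frag sketch: texture sample modulated by banded lighting.
// All names below are assumptions, not the actual ReadBlend identifiers.
precision mediump float;

uniform sampler2D sTexture;
varying vec2 vTexCoord;
varying float vDiffuse;     // N.L computed in the vertex shader

void main()
{
    vec3 texColor = texture2D(sTexture, vTexCoord).rgb;

    // Quantize the lighting into a few discrete bands for the cartoon look.
    float intensity;
    if (vDiffuse > 0.95)      intensity = 1.0;
    else if (vDiffuse > 0.5)  intensity = 0.7;
    else if (vDiffuse > 0.25) intensity = 0.4;
    else                      intensity = 0.2;

    gl_FragColor = vec4(texColor * intensity, 1.0);
}
```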

I've simplified the ReadBlend scene textures to be more cartoon-like, and modified Toon.frag so that the silhouette has the same color as the area inside it, just in a darker tone. IMHO it looks better than the blue-white-black scene from the previous posts:




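A minimal sketch of the "silhouette in a darker tone" idea, assuming eye-space normal and view-direction varyings (the names and the 0.3 threshold are mine, not the actual Toon.frag code): where the surface turns away from the viewer, keep the object's own texture color but scale it down, instead of painting a fixed silhouette color.

```glsl
// Sketch: darken the object's own color near the silhouette.
// Names and thresholds are illustrative assumptions.
precision mediump float;

uniform sampler2D sTexture;
varying vec2 vTexCoord;
varying vec3 vNormal;       // interpolated normal (eye space)
varying vec3 vViewDir;      // direction towards the viewer (eye space)

void main()
{
    vec3 base = texture2D(sTexture, vTexCoord).rgb;
    float facing = dot(normalize(vNormal), normalize(vViewDir));

    // Facing ratio near 0 means we are looking at the surface edge-on:
    // keep the same hue, just in a darker tone.
    float tone = (facing < 0.3) ? 0.4 : 1.0;

    gl_FragColor = vec4(base * tone, 1.0);
}
```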
Without geometry shaders, or without expensive per-face analysis on the CPU side, I still need to find a proper technique. There are other things I need to figure out first, so I'm coming back to this later.
Geometry shaders are not supported on iOS (or on other OpenGL ES 2.0 implementations). I understand that the hardware supports them (PowerVR SGX and SGXMP), but the feature is available only through the DirectX Shader Model. Anyway, for other techniques on OpenGL ES, or for those who don't plan to use OpenGL ES (and so can use geometry shaders) and are visiting this page, I recommend reading The Little Grasshopper's post about silhouette extraction.

Update on 27.02.2012 - found a nice example in the PowerVR documentation (a document titled Edge Detection Training Course), based on edge-detection post-processing: render the scene in a first pass, then detect and draw edges on the GPU in a second pass. I plan to give it a try soon. Note that the edge detection is not a slow color comparison against surrounding texels; instead, the first pass packs an object id into the alpha channel, and the second-pass post-processing compares only with 2 surrounding texels. On an iPhone 3GS this runs at 60 fps.
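The second pass can be sketched as a fragment shader along these lines. This is my reading of the technique, not the PowerVR sample code; the uniform names (`sScene`, `uTexelSize`, `uEdgeWidth`) are assumptions. The key point is that only 2 neighbouring texels are fetched, and only their alpha (the object id) is compared.

```glsl
// Second-pass sketch of id-in-alpha edge detection (names are assumptions).
// The first pass renders the scene to a texture, writing a per-object id
// into the alpha channel; this pass marks texels whose id differs from
// either of 2 neighbours.
precision mediump float;

uniform sampler2D sScene;      // first-pass render target, object id in alpha
uniform vec2 uTexelSize;       // 1.0 / render-target resolution
uniform float uEdgeWidth;      // edge size in pixels (1, 2, 3, ...)
varying vec2 vTexCoord;

void main()
{
    vec4 center  = texture2D(sScene, vTexCoord);
    float idRight = texture2D(sScene,
        vTexCoord + vec2(uTexelSize.x * uEdgeWidth, 0.0)).a;
    float idUp    = texture2D(sScene,
        vTexCoord + vec2(0.0, uTexelSize.y * uEdgeWidth)).a;

    // Different object ids on either side of this texel => silhouette edge.
    bool edge = (center.a != idRight) || (center.a != idUp);
    gl_FragColor = edge ? vec4(0.0, 0.0, 0.0, 1.0)
                        : vec4(center.rgb, 1.0);
}
```

Widening `uEdgeWidth` simply samples the neighbours further away, which is what produces the 1, 2, and 3 pixel wide edges shown in the video below.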


I applied this (using the alpha channel to hold info about the object) and got nice results (the video shows the edge size changing from 1 to 2 to 3 pixels wide):




I plan to experiment with using normals instead of object ids, to get internal edges too, not only silhouettes, and will post the full code then.


