Monday, November 7, 2011

Tree branches collision detection

I showed the example from the previous post to my cousin Miša and explained how I plan to add physics and collision detection for each branch and leaf of the tree. He noticed that the tree looks quite complex (several thousand vertices) and asked how much CPU that could consume, and how much it could affect performance and lower the frame rate...
In short, that is what this post, the code examples and the demo are about. There is another thing that could be interesting: animating the camera position from behind the ball to a position orthogonal to the ball's direction while the ball is moving - without stopping or slowing down the simulation as in previous posts.

I have to apologize for the not-so-good quality of the videos I made; I'll try to compensate with detailed explanations and code examples showing how to use the related ngPlant and Bullet Physics APIs.

The video above shows a heavy ball rolling down the hill to hit the tree, bounce back and hit a low-hanging tree branch.
The other video shows a rather heavy ball rolling down the hill and hitting the tree; after it rolls away from the tree, btRigidBody::applyCentralImpulse() is applied, throwing the ball back at the tree. After it bounces back several times and gets thrown again, the ball eventually gets stuck between the branches of the tree.

This is a simplified version of the code that throws the ball from the ground toward the tree - an impulse at a 45-degree angle (500 horizontally and 500 vertically):

Vec3 vToTree;
// vTarget is the tree, vFrom is the current ball position
MatrixVec3Normalize(vToTree, vTarget - vFrom);
((btRigidBody*)ball)->applyCentralImpulse(btVector3(0., 0., 500.) +
    500. * btVector3(vToTree.x, vToTree.y, vToTree.z));
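The impulse math can also be sketched self-contained (a minimal hypothetical Vec3 stand-in, no engine or Bullet dependency); the 45-degree angle comes from giving the vertical and horizontal components equal magnitude (500 each), assuming the ball-to-tree direction is roughly horizontal:

```cpp
#include <cassert>
#include <cmath>

// Minimal stand-ins for the engine's Vec3 and MatrixVec3Normalize,
// just to illustrate the impulse math; names are hypothetical.
struct V3 { float x, y, z; };

static V3 normalize(const V3 &v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return V3{v.x / len, v.y / len, v.z / len};
}

// Combine a vertical component (500 along z, the "up" axis) with a
// horizontal component (500 along the normalized ball->tree direction).
static V3 throwImpulse(const V3 &vFrom, const V3 &vTarget) {
    V3 toTree = normalize(V3{vTarget.x - vFrom.x,
                             vTarget.y - vFrom.y,
                             vTarget.z - vFrom.z});
    return V3{500.f * toTree.x,
              500.f * toTree.y,
              500.f + 500.f * toTree.z};
}
```

With the ball at the origin and the tree at (10, 0, 0), this yields the impulse (500, 0, 500) - equal parts forward and up.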


Camera animation while the simulation is running - from behind the ball to orthogonal to the ball's rolling direction

For this, I'm using the TransformInterpolator class designed in the previous post about camera movement animation - the start point of the transform interpolation is defined as the camera behind the ball:


MATRIX mTransformStart;
// take vFrom as the previous position and vTarget as the current one - in this case the tree position
modelTransformWithCameraBehindTheBall(mTransformStart, vFrom, vTarget);
cameraAnimation = new TransformInterpolator(mTransformStart, mTransform, 75);


The end point of the camera animation (mTransform) is the camera transform that looks at the ball orthogonally to the direction of the ball's movement, calculated as in this example:

Vec3 vBallDirection = vTarget - vBall;
Vec3 vViewDirection, vDirection;
// the camera position lies on vector vViewDirection from the ball:
// it needs to be orthogonal to both the ball movement direction (vBallDirection)
// and the "up" vector, as in this example's coordinate system z means up
MatrixVec3CrossProduct(vViewDirection, vBallDirection, Vec3(0,0,1));

MatrixVec3Normalize(vDirection, vViewDirection);
// the center of the view is between the ball and the tree
Vec3 vTo = vBall + vBallDirection / 2;

// we take some distance from the "center of the view" - a point that lies on vector vViewDirection;
// looking at vTo then gives a nice landscape view, orthogonal to "up" and the ball direction
MatrixLookAtRH(mTransform, vTo + 3.f * CameraDistanceFromBall * vDirection, vTo, Vec3(0,0,1));
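The cross-product step can be checked in isolation; a small self-contained sketch (hypothetical minimal types, no engine dependency) of computing the view direction orthogonal to both the ball's movement and the up axis:

```cpp
#include <cassert>
#include <cmath>

struct V3 { float x, y, z; };

// Stand-ins for MatrixVec3CrossProduct / MatrixVec3Normalize.
static V3 cross(const V3 &a, const V3 &b) {
    return V3{a.y * b.z - a.z * b.y,
              a.z * b.x - a.x * b.z,
              a.x * b.y - a.y * b.x};
}
static float dot(const V3 &a, const V3 &b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}
static V3 normalize(const V3 &v) {
    float len = std::sqrt(dot(v, v));
    return V3{v.x / len, v.y / len, v.z / len};
}

// View direction orthogonal to both the ball's movement and the up axis (z).
static V3 sideViewDirection(const V3 &ballDirection) {
    return normalize(cross(ballDirection, V3{0.f, 0.f, 1.f}));
}
```

For a ball rolling along +x, this gives (0, -1, 0): the camera ends up to the side of the ball, which is exactly the landscape view described above.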


It is important to update the cameraAnimation end point after each dynamicsWorld->stepSimulation(); as the ball advances toward the tree, the camera needs to follow - after the animation is finished, the camera will just follow the ball's movement. Since the target is the tree, this gives a nice side view of the ball hitting and bouncing off the tree, as in both videos above.

Physics (collision) for the tree and all tree branches

ngPlant and the Bullet Physics btIndexedMesh and btTriangleMesh APIs fit together well; I'm using:

void P3DHLIPlantInstance::FillVAttrBuffersI( const P3DHLIVAttrBuffers *VAttrBuffers, unsigned int GroupIndex) const

to load the vertices, used both to render the tree and as btIndexedMesh::m_vertexBase for physics, and:


void P3DHLIPlantTemplate::FillIndexBuffer
(void *IndexBuffer,
unsigned int GroupIndex,
unsigned int PrimitiveType,
unsigned int ElementType,
unsigned int IndexBase) const

to load the triangle vertex indices, used both to render the tree and as btIndexedMesh::m_triangleIndexBase for physics.

This is the code used to add a tree's mesh to the btDynamicsWorld:


btVector3 scale(object->size.x,object->size.y,object->size.z);
btTransform worldTrans;
worldTrans.setIdentity();
worldTrans.setOrigin(btVector3(object->loc.x,object->loc.y,object->loc.z));
worldTrans.getBasis().setEulerZYX(object->rot.x,object->rot.y,object->rot.z);


btTriangleMesh* meshInterface = new btTriangleMesh();
// for now, modeling trees this way in Blender - a mesh whose name starts with OBplant_
if (!strncmp(object->id.name, "OBplant_", 8)) {

// get the plant data
PlantGraphicsObject *plant = (PlantGraphicsObject *)createGraphicsObject(object, 0);

// fill the mesh bullet interface with plant branch layer data
for (int i = 0; i < plant->plantData().branchLevels.size(); i++) { 

    btIndexedMesh mesh; 
    const PlantObjectData::BranchLevel &levelData(plant->plantData().branchLevels.at(i));
    mesh.m_numTriangles = levelData.IndexCount / 3;
    mesh.m_triangleIndexBase = (const unsigned char *)levelData.IndexBuffer;
    mesh.m_triangleIndexStride = 3 * sizeof(unsigned int);
    mesh.m_numVertices = levelData.vertexCount;
    mesh.m_vertexBase = (const unsigned char *)levelData.PosBuffer;
    mesh.m_vertexStride = sizeof(float) * 3;
    meshInterface->addIndexedMesh(mesh);
}

// create a static concave (triangle mesh) collision shape from the Bullet mesh interface
btCollisionShape* colShape = 0;
btBvhTriangleMeshShape* childShape = new btBvhTriangleMeshShape(meshInterface, true);

if (scale[0]!=1. || scale[1]!=1. || scale[2]!=1.) {
// I have scaled down the mesh in blender as it was too big
    colShape = new btScaledBvhTriangleMeshShape(childShape,scale);
} else {     

    colShape = childShape;
}

btVector3 inertia(0,0,0);
btRigidBody* colObj = new btRigidBody(0.f,0,colShape,inertia);
colObj->setWorldTransform(worldTrans);

// references ...
colObj->setCollisionShape(colShape);
plant->setCollisionObject(colObj);
colObj->setUserPointer(plant);

// add the physics object to the world
m_destinationWorld->addRigidBody(colObj);
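The btIndexedMesh fields above are just base pointers plus strides; a tiny self-contained sketch (hypothetical buffers, no Bullet dependency) shows how the triangle count and per-triangle vertex lookup follow from IndexCount and the strides, the same way Bullet walks the branch-level buffers:

```cpp
#include <cassert>

// Hypothetical flat buffers, laid out like the plant branch-level data above:
// PosBuffer holds xyz floats per vertex, IndexBuffer holds 3 indices per triangle.
static const float PosBuffer[] = {
    0.f, 0.f, 0.f,   // vertex 0
    1.f, 0.f, 0.f,   // vertex 1
    0.f, 1.f, 0.f,   // vertex 2
    1.f, 1.f, 0.f    // vertex 3
};
static const unsigned int IndexBuffer[] = { 0, 1, 2,  2, 1, 3 };
static const unsigned int IndexCount = 6;

// Same arithmetic as m_numTriangles = IndexCount / 3, with
// m_triangleIndexStride = 3 * sizeof(unsigned int) and
// m_vertexStride = 3 * sizeof(float).
static unsigned int numTriangles() { return IndexCount / 3; }

// Fetch the x coordinate of vertex i (0..2) of triangle t, walking
// the index buffer first and then the vertex buffer.
static float triangleVertexX(unsigned int t, unsigned int i) {
    unsigned int vertexIndex = IndexBuffer[3 * t + i];
    return PosBuffer[3 * vertexIndex + 0];
}
```

If the strides or base pointers are wrong, Bullet silently reads garbage triangles, so it's worth validating this arithmetic against a tiny known mesh first.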

Sunday, October 16, 2011

N9

I'm very proud to have been part of the N9 team for the last three years. The device is out. It is a beautiful piece of hardware with an amazing screen.

I'll start using it as a development device for the things I'm learning here. Here is the demo from the last post, with 2 ngPlant trees, running on the N9.


The next thing I'm planning to do is to add collision for all of the branches of an ngPlant tree to PlayBlends' btDiscreteDynamicsWorld - say, simulating a ball hitting a branch, then bouncing from one to another until it falls down to the ground... or a micro airplane flying through a treetop... anyway, I'd just like to see how it performs and what the constraints are. Rendering the ngPlant tree seems quite OK - uploading vertex and index data to GPU buffers for rendering:

        glBindBuffer(GL_ARRAY_BUFFER,...);
        glBufferData(GL_ARRAY_BUFFER, ... GL_STATIC_DRAW);

....
        Piper::instance()->glDrawElements(GL_TRIANGLES....

Note that ngPlant doesn't yet support exporting mesh data as triangle strips. The documentation states it is planned for a future version.
For the tree-branch collision detection, I'll start by reading Real-Time Collision Detection and checking btBvhTriangleMeshShape.

Sunday, September 25, 2011

Rendering plants, trees, forest. ngPlant.

ngPlant (http://ngplant.sourceforge.net/) is open source plant modeling software, and I wanted to check how to use it and how it performs. I downloaded the ngPlant source from http://sourceforge.net/projects/ngplant/files/ngplant/0.9.8/ngplant-0.9.8.tar.gz/download.

ngPlant license:
"
ngPlant is a Free Software project. ngPlant modeling tool and ngpshot utility are distributed under the terms of the GNU General Public License (GPL). Software libraries (libngpcore, libngput and pywrapper) which may be used in another projects are distributed under the terms of the BSD License.
"
I checked the code; to generate models (ngPlant documentation uses the term "instantiate") and render a group of (different) trees of the same species, I would need libngpcore and maybe libngput. "Maybe libngput" because some of that code seems to already exist in the oolongengine code.
The BSD license (libngpcore and libngput) means the code is OK to statically link and use in iPhone apps. The GPL license for ngpview and ngpshot prohibits reusing their rendering implementation in iPhone or N9 (non-GPL) applications. Anyway, I checked the code, and since it is fixed-pipeline desktop OpenGL code using GLX and GLEW, I could not use it as-is anyway.
In short, I could reuse the loading of .ngp files and the instantiation of tree mesh models, but needed to write the rendering myself.

Compiling went OK with no big problems - there is a dependency on GLX in p3dglmemcntx.cpp (and .h). I removed it and the PNG backend from compilation. Next was to write OpenGL ES 2.0 rendering for ngPlant and include its physics. First, I took the example elm.ngp available in Yorik's plant-making tutorial. There is also a plant library available here. Thanks, Yorik.

Anyway,... elm.ngp. With tens of thousands of triangles, the tree model was a bit heavy to render, so I decided to simplify it a bit: I removed a few branches and did not even display leaves in this scene (ngPlant uses the term billboards). The elm tree was too big compared to the rest of the scene, so I scaled it down when rendering. I positioned the tree so that the ball would, after hitting the boxes, bounce off it. Here is the result:


Saturday, August 6, 2011

Porting to Linux (MeeGo)

The video corresponds to the first version of the Linux (EGL, XLib, OpenGL ES 2.0) port. An N900 is used for the demo, and I plan to provide a comparable video on the N9 later, after sales start.

I pushed the source code to the branch portToMeego. It uses EGL and XLib to create the window and initialize the GL surface. I used Qt project files and ran the code from Qt Creator. Qt SDK 1.1.2 includes support for the N9 (MeeGo Harmattan), MeeGo and the N900.

There is a nice example on wiki.maemo.org demonstrating how to use EGL, XLib and OpenGL ES 2.0. To run it on MeeGo devices, you just need to remove the N900-specific call to XChangeProperty with "_HILDON_NON_COMPOSITED_WINDOW".
Compared to iOS's EAGLContext::presentRenderBuffer (the example behind the link also includes initialization), with EGL you swap the render buffer using eglSwapBuffers, like here.

Sunday, January 30, 2011

Cel shading, toon shading, silhouette extraction,... how to make the scene look better

So far, it has been about porting ReadBlend (a Bullet physics world populated from a .blend file) from OpenGL ES 1.1 (fixed pipeline) to OpenGL ES 2.0 (GLSL), using an example toon shader (blue-white-black color palette), and making the camera slerp, lerp and follow the ball a few metres behind it while the ball accelerates down the hill.
I'd like to learn how to make it look better.

The first approach would be to use a texture in the toon shader.

The left side shows the original, with white Phong reflection. On the right-hand side is,... I think the silhouette looks nice, but the texture is wrong for a cartoon look. Phong doesn't look like Phong anymore. A patch implementing the described change to Toon.frag is here. It might help... a few lines of code copied from Texture.frag.

I've simplified the ReadBlend scene textures to be more cartoon-like, and modified Toon.frag so that the silhouette has the same color as the area inside it, just a darker tone. IMHO it looks better than the blue-white-black scene from previous posts:




Without geometry shaders or expensive analysis of faces on the CPU side, I still need to find a proper technique. There are other things I need to figure out first, so... I'm coming back to this later.
Geometry shaders are not supported on iOS (or other OpenGL ES 2.0 implementations). I understand the hardware supports them (PowerVR SGX and SGXMP), but the feature is only available through the DirectX shader model. Anyway, for other techniques on OpenGL ES, or for those who don't plan to use OpenGL ES (and so can use geometry shaders) and are visiting this page, I recommend reading The Little Grasshopper's post about silhouette extraction.

Update on 27.02.2012 - I found a nice example in the PowerVR documentation (a document titled Edge Detection Training Course), based on edge-detection postprocessing: render the scene in a first pass, then detect and draw edges on the GPU in a second pass. I plan to give it a try soon. Note that the edge detection is not a slow color comparison against surrounding texels; instead it packs an object id into the alpha channel, and the postprocessing second pass compares only with 2 surrounding texels. On the iPhone 3GS this runs at 60fps.
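The idea can be sketched on the CPU (this is my rough reconstruction of the technique, not the PowerVR sample code; in the real thing both passes run on the GPU, the second in a fragment shader):

```cpp
#include <cassert>
#include <vector>

// First pass writes an object id into the alpha channel; the edge pass
// compares each texel's id with only two neighbours (right and below)
// instead of doing colour comparisons against the full neighbourhood.
struct AlphaBuffer {
    int w, h;
    std::vector<unsigned char> objectId; // the "alpha channel": one id per texel
    unsigned char at(int x, int y) const { return objectId[y * w + x]; }
};

// A texel is on an edge if its object id differs from either of the
// two sampled neighbours - 2 extra reads per texel, not 8.
static bool isEdge(const AlphaBuffer &buf, int x, int y) {
    if (x + 1 >= buf.w || y + 1 >= buf.h) return false;
    return buf.at(x, y) != buf.at(x + 1, y) ||
           buf.at(x, y) != buf.at(x, y + 1);
}
```

Comparing ids instead of colours is what makes it cheap and exact: edges appear precisely on object boundaries, with no threshold tuning.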


I applied this (using the alpha channel to hold info about the object) and got nice results (the video shows the edge size changing from 1 to 2 to 3 pixels wide):




I plan to experiment with using normals instead of object ids, to also get internal edges, not only silhouettes, and will post the full code then.



Saturday, January 22, 2011

Camera movement animation, slerp and lerp interpolation, moving camera through 3D world

Code for the example is in this commit.

Here, I wanted to make the camera in the example from the first post (where the camera is static, always displaying the scene from the same position) move; initially to rotate around the scene from its initial position to a position right behind the ball. Once it is positioned behind the ball, the physics step simulation starts - the ball starts rolling towards the target and the camera follows it down the hill.

Check the video to see how it looks - I'll prepare the video and then explain the details.


For this example, the key points (in 3D space) that define the camera trajectory and orientation are:
1) the initial camera position as defined in the .blend file, looking at the scene from the left side
2) just behind the ball, looking in the direction of the target
3) the target (I just picked one of the cubes in the scene - MECube.013).

Camera position animation from 1 to 2 is done with

if (cameraAnimation)
    cameraAnimation->step();

where cameraAnimation is an instance of TransformInterpolator, interpolating - animating - values from one matrix to another using slerp for the rotation and lerp for the translation. TransformInterpolator combines MATRIX lerp and slerp and provides convenience step() and finished() methods to model the animation interpolation.

To get the slerp working: there are 2 slerp implementations in oolongengine, and I'm using the one from Math/Matrix.cpp. I could have also picked the Bullet Physics one, but I did not want to convert MATRIX -> btTransform -> btQuaternion -> do the slerp (then back from quaternion to MATRIX) -> btTransform -> float[]. The reason I'm planning to use the Math/Matrix.cpp code is its NEON (ARMv7) matrix multiply implementation. It's quite possible I'm wrong and there is no performance difference between MATRIX's and btTransform's matrix multiplication...
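What one interpolation step does can be sketched roughly like this (hypothetical simplified types, not the oolongengine code): the translation is lerped component-wise, while the rotation is converted to a quaternion and slerped:

```cpp
#include <cassert>
#include <cmath>

struct Q { float w, x, y, z; }; // unit quaternion (rotation)

// Spherical linear interpolation between two unit quaternions.
static Q slerp(const Q &a, const Q &b, float t) {
    float cosTheta = a.w*b.w + a.x*b.x + a.y*b.y + a.z*b.z;
    Q end = b;
    if (cosTheta < 0.f) { // take the shorter arc
        cosTheta = -cosTheta;
        end = Q{-b.w, -b.x, -b.y, -b.z};
    }
    if (cosTheta > 0.9995f) { // nearly parallel: fall back to lerp
        return Q{a.w + t*(end.w - a.w), a.x + t*(end.x - a.x),
                 a.y + t*(end.y - a.y), a.z + t*(end.z - a.z)};
    }
    float theta = std::acos(cosTheta);
    float s = std::sin(theta);
    float wa = std::sin((1.f - t) * theta) / s;
    float wb = std::sin(t * theta) / s;
    return Q{wa*a.w + wb*end.w, wa*a.x + wb*end.x,
             wa*a.y + wb*end.y, wa*a.z + wb*end.z};
}

// Linear interpolation, applied per translation component.
static float lerp(float a, float b, float t) { return a + t * (b - a); }
```

Halfway between the identity and a 90-degree rotation about z, slerp gives exactly the 45-degree rotation, which is why the camera sweeps along an arc instead of cutting across it.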

The first problem I faced - and it took me some time to debug, refreshing meanwhile my knowledge about matrix scaling and determinants - was that converting a MATRIX from MatrixLookAtLH to a quaternion and back to a matrix resulted in a scaled matrix. The fix was to normalize the vectors after multiplying in MatrixLookAt - not before the multiplication. After this patch, it started to work. I also added matrix-to-quaternion conversion here.

Following the ball
Once the camera reaches position 2, cameraAnimation->finished() returns true and we start stepping btDiscreteDynamicsWorld; the ball goes down the hill and the camera follows it, looking toward the target.
I figured out that, if I'd like to slow down the btDiscreteDynamicsWorld simulation and make the camera fly around the world in slow motion, I just need to supply a value for sceneSlowdownWhileAnimatingCamera lower than one, e.g. 0.5, to btDiscreteDynamicsWorld->stepSimulation(sceneSlowdownWhileAnimatingCamera * 1./60.).
Anyway, let's see first how to get that toon shader and scene look better.

Sunday, December 26, 2010

Rendering Blender model in OpenGL ES 2.0 - porting OpenGL ES 1.1 (fixed) to 2.0 (programmable GLSL) pipeline

The code I was able to find in SIO2 and oolongengine (the Blender reader copied from gamekit) rendered Blender models using the fixed pipeline. I needed cartoon-like rendering, so I decided to go with OpenGL ES 2.0 and GLSL.
oolongengine (3Dlabs Inc) includes this OpenGL ES 2.0 example - rendering a rotating teapot:
  

The shader in the video is a simple toon shader. I decided to use it when implementing this example.

I couldn't find an OpenGL ES 2.0 example that would render a 3D model loaded from a .blend or .pod file - only examples with OpenGL ES 1.1 code. One of those OpenGL ES 1.1 examples, ReadBlend from oolongengine, looks like this:


I'll try to explain the approach to combine them:



Code is available here.

The Piper class, with PiperGL11 and PiperGL20 subclasses, is a simple work in progress intended to abstract the differences between fixed and programmable rendering code in one place. For now, it includes push/pop/set/multMatrix operations.
E.g. in the code, it was just required to replace the ES 1.1 call glMultMatrix() with a call to Piper::instance()->multMatrix(), and that brings support for both OpenGL ES 1.1 and OpenGL ES 2.0.
I decided not to use the pattern of first selecting the mode with glMatrixMode() and then doing the matrix operation, as it looked cleaner to supply the matrix mode with each operation. Of course, PiperGL11 avoids redundantly setting the same mode with setMatrixMode.
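A rough sketch of that design (heavily simplified and hypothetical - the real Piper works on 4x4 matrices and GL state, here a single float stands in for a matrix): the matrix mode is a parameter of every call, and the GL ES 2.0 implementation keeps its own per-mode state to feed shader uniforms later, since there is no fixed-function matrix stack to lean on.

```cpp
#include <cassert>

// Mode passed per call, instead of a sticky glMatrixMode()-style selector.
enum MatrixMode { ModelView = 0, Projection = 1 };

struct Piper {
    virtual ~Piper() {}
    virtual void setIdentity(MatrixMode mode) = 0;
    virtual void translate(MatrixMode mode, float x) = 0; // toy 1D "matrix op"
    virtual float current(MatrixMode mode) const = 0;
};

// GL ES 2.0 flavour: no fixed-function state, so track everything ourselves;
// at draw time the accumulated values would be uploaded as uniforms.
// (A PiperGL11 sibling would instead forward to glMatrixMode()/glMultMatrix(),
// skipping the mode switch when it hasn't changed.)
struct PiperGL20 : Piper {
    float m[2] = {0.f, 0.f}; // one "matrix" per mode
    void setIdentity(MatrixMode mode) override { m[mode] = 0.f; }
    void translate(MatrixMode mode, float x) override { m[mode] += x; }
    float current(MatrixMode mode) const override { return m[mode]; }
};
```

The per-call mode keeps call sites self-describing and lets each backend decide how (or whether) a mode switch costs anything.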

For the OpenGL ES 2.0 version, the uniform values need to be set just before the call to glDrawElements. I guess it is important to optimize this and handle the case when nothing changed in the uniforms between consecutive calls to Piper::instance()->glDrawElements; setupChangedVariables should be able to handle this.

I plan to work on touch input and navigating through the scene, maybe with the camera behind the ball, check different versions of toon shaders I can find online,... figure out how to add some more color to the scene - with a color attribute array or textures - and check whether there is a difference in performance...