Thursday, December 15, 2011

Skeleton and pose character animation using AnimKit

Objective: How to implement pose and skeleton character animation in OpenGL ES 2.0?

I don't want to use gamekit with the OGRE 1.8 rendering engine - I need something lighter. I don't see this as "reinventing the wheel" - though there is nothing wrong with reinventing the wheel when it is fun, when you enjoy doing it. Hunter S. Thompson typed out copies of Hemingway's books - he enjoyed doing it and he knew why he was doing it. I believe he didn't feel he would live for 1000 years - so, why not waste a bit of time... IMHO, doing this is the best/fastest way for me to learn, and it is fun. I guess one could say that I could also type out copies of Hunter S. Thompson's articles before continuing, to make this blog more fun to read.

Anyway, I think gamekit's AnimKit is just what I need. I briefly checked the code and ran it on my desktop: it supports vertex (pose and morph) and skeleton animations and animation blending, with inverse kinematics on the roadmap. I grepped for MATRIX_PALETTE - it seems to have only software (on CPU) animation implemented. There is a nice demo application, AppAnimKitGL, which looks quite promising. It depends on libGLU, libGLEW and libGLUT, and uses the fixed rendering pipeline. So, some effort is needed to port it to iPhone and N9 - but I did this kind of work in previous posts/examples, so it should go faster now. Let's see. I plan to focus on OpenGL ES 2.0 first.
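To illustrate what software skinning means here, a minimal sketch of CPU matrix-palette skinning - types and names are mine, not AnimKit's actual API:

#include <vector>
#include <cstddef>

// Illustrative only: each vertex is influenced by up to 4 bones.
struct SkinnedVertex {
    float pos[3];
    int   bone[4];    // indices into the matrix palette
    float weight[4];  // blend weights, summing to 1
};

// Multiply a point by a column-major 4x4 matrix (w assumed 1).
static void transformPoint(const float m[16], const float in[3], float out[3])
{
    for (int r = 0; r < 3; r++)
        out[r] = m[r] * in[0] + m[4 + r] * in[1] + m[8 + r] * in[2] + m[12 + r];
}

// palette holds one 4x4 matrix per bone, recomputed from the animation each
// frame; the skinned positions are then uploaded to the GPU for rendering.
void skinVertices(const std::vector<SkinnedVertex> &src,
                  const std::vector<const float *> &palette,
                  std::vector<float> &dst)
{
    dst.resize(src.size() * 3);
    for (std::size_t i = 0; i < src.size(); i++) {
        float acc[3] = { 0.f, 0.f, 0.f };
        for (int j = 0; j < 4; j++) {
            if (src[i].weight[j] == 0.f)
                continue;
            float p[3];
            transformPoint(palette[src[i].bone[j]], src[i].pos, p);
            for (int k = 0; k < 3; k++)
                acc[k] += src[i].weight[j] * p[k];
        }
        for (int k = 0; k < 3; k++)
            dst[3 * i + k] = acc[k];
    }
}

Hardware (GPU) matrix palette skinning moves exactly this loop into the vertex shader, with the palette passed as uniforms.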
...
Continuing:
Had some time during the weekend to check this. Here is how it looks on a Nokia N9, running next to the AnimKit demo on Ubuntu:


Note that I did not port all of the demo helper features (bone and normal rendering, etc.). The difference in texture color on the N9 comes from using a loaded BGR texture as RGB - if you plan to use the same code, you'll probably use a different Blender model and a different texture loader.
Code is available here: https://github.com/astojilj/astojilj_animkit/commit/df571339ab55d74681badf8c88cc468cedb3d372
To run it on a Nokia N9, open Samples/AnimKitGL/AppAnimKitGl.pro with Qt Creator and just build & deploy. If you intend to run the original demo on the desktop, note that you'll need to have Blu.blend in the current directory.

The next thing I plan to do is add (copy from the previous project) the iPhone/iPad project support files.

...

iPhone version




Note that the fragment shader is different from the N9 version - I uncommented the code that darkens the borders and does the Phong shading (giving the cartoon look).

Started from Xcode: File->New Project->OpenGL ES Application (iPhone). After this, I added all the sources to the project, replaced all in-project includes using <> with "" (e.g. #include "Mathematics.h") and added OPENGL_ES_2_0 to the Preprocessor Macros item in the Target Info dialog. I also added code to print out frames-per-second info.
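The FPS printout is along these lines - a sketch, not the exact code I added:

#include <sys/time.h>
#include <cstdio>

// Rough FPS counter - call once per rendered frame.
void printFPS()
{
    static int frames = 0;
    static double lastTime = 0.0;
    timeval tv;
    gettimeofday(&tv, 0);
    double now = tv.tv_sec + tv.tv_usec / 1000000.0;
    frames++;
    if (now - lastTime >= 1.0) {
        printf("%.1f fps\n", frames / (now - lastTime));
        frames = 0;
        lastTime = now;
    }
}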

This made things compile, but when run I couldn't see anything rendered. Both platforms use the same memory layout (little endian), so that wasn't the problem. It turned out that with 15000+ indices, glDrawElements with GL_UNSIGNED_INT indices doesn't work; GL_UNSIGNED_SHORT does.
I put a quick fix in the code with the following comment:


// in iOS 4.1 simulator, glDrawElements(GL_TRIANGLES, cnt, GL_UNSIGNED_INT did not work - did not show anything on screen
// while GL_UNSIGNED_SHORT works. Using this as temporary workaround until trying with new sdk or anyway, change types
// in animkit code (no reason to use unsigned int anyway).
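For reference, the narrowing itself is trivial as long as no index value exceeds 65535 - a sketch, not the exact workaround code:

#include <vector>
#include <cassert>

// Narrow 32-bit indices to 16-bit so glDrawElements can be called with
// GL_UNSIGNED_SHORT; valid while every index fits in 16 bits.
std::vector<unsigned short> narrowIndices(const unsigned int *src, int count)
{
    std::vector<unsigned short> dst(count);
    for (int i = 0; i < count; i++) {
        assert(src[i] <= 0xFFFF);
        dst[i] = (unsigned short)src[i];
    }
    return dst;
    // then: glDrawElements(GL_TRIANGLES, count, GL_UNSIGNED_SHORT, &dst[0]);
}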


This got the scene rendered, and the only thing left to do was to add depth buffer support (the default Xcode OpenGL ES template doesn't enable it).
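The depth attachment boils down to something like this in the framebuffer setup (backingWidth/backingHeight are assumed to come from the template's layer setup):

// Attach a depth renderbuffer to the FBO created by the Xcode template.
GLuint depthRenderbuffer;
glGenRenderbuffers(1, &depthRenderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, depthRenderbuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16,
                      backingWidth, backingHeight);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER, depthRenderbuffer);
glEnable(GL_DEPTH_TEST);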

Code for the example running on iPhone (note that I was using iOS SDK 4.1 - yep, I plan to update soon...): https://github.com/astojilj/astojilj_animkit/commit/5c11f660281bb8c8a6b5e3bba93938f703b6307c. A few fixes were added later here: https://github.com/astojilj/astojilj_animkit/commit/03545c7d1ea753c078aab4d441d4f12d91d462b8
I plan to merge it to master after verifying the changes don't break anything on the N9.

The next thing to try is a scene with multiple animated-skeleton actors (I think I'll do cartoon animals) using hardware (on GPU) matrix palette skeleton animation.

Tuesday, December 6, 2011

Collision detection for character - hit character falls down as rag-doll


Objective: how to implement collision detection for an animated character? The character has a bone system, and its position and movement are controlled by poses and animations, but once it gets hit it needs to fall down like a rag-doll.

Get the character model into ReadBlend: no code change is needed for this part - if you have an existing model with a character, open the project's blend file (PhysicsAnimationBakingDemo.blend) and use Blender's File->Append or Link command. Once you select the blender file to "append" the character model from, select Objects and then select the meshes, armatures or constraints to import. I tried with this file and it worked - I got the static character displayed in the demo. However, with ~300 triangles per piece, I needed to reduce the number of vertices so it wouldn't affect the frames per second count. If you hit an "Error Not A Library" error in Blender when appending, it is usually related to incompatible versions - open the biped-rig blend file, save it (overwrite) and then try appending from it again.

Once I got biped_rig's character displayed next to the tree, it just didn't look nice. I decided to make my own from boxes; it didn't take long to make the head, spine, pelvis, upper and lower legs and arms, and feet (and hands) from boxes used in the scene. When the model is created, assuming all body pieces are left in the default "Static" mode (and not "Rigid body") in Blender, the character made of boxes gets displayed in the demo in a standing position, not affected by gravity but with the ball bouncing against it. If the body parts were marked as "Dynamic", the ball would push them. So, for character body parts controlled by animation, the proper mode is Dynamic. Once contact with the ball is detected, we will just "switch" them to be affected by gravity and they'll fall down.

Modeling joint constraints in Blender: to get "more realistic" rag-doll joint constraints, I need btConeTwistConstraint and btHingeConstraint and to set their limits. Check the joint constraint details in the Bullet physics Ragdoll example code. I didn't set the limits in Blender 2.49, which you can tell from the demo below. When needed, constraints can be tweaked in C++ code (BulletBlendReaderNew::convertConstraints()). Anyway, just add joint constraints between body parts (this tutorial might help) to the Blender file and they work on the phone too.
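For illustration, setting such limits in Bullet looks roughly like this - the bodies, frames and angle values are placeholders in the spirit of the Bullet ragdoll demo, not values from my rig:

// Illustrative only: local frames would be derived from the Blender rig.
btTransform localA, localB;
localA.setIdentity();
localB.setIdentity();

btConeTwistConstraint *shoulder =
        new btConeTwistConstraint(*upperArmBody, *spineBody, localA, localB);
shoulder->setLimit(SIMD_PI / 4, SIMD_PI / 4, SIMD_PI / 2); // swing1, swing2, twist

btHingeConstraint *knee =
        new btHingeConstraint(*upperLegBody, *lowerLegBody,
                              btVector3(0, 0, -0.2), btVector3(0, 0, 0.2),
                              btVector3(1, 0, 0), btVector3(1, 0, 0));
knee->setLimit(0, SIMD_PI / 2); // knee bends one way only

m_destinationWorld->addConstraint(shoulder, true);
m_destinationWorld->addConstraint(knee, true);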

Turn on gravity for the character body parts - when the character is hit, it needs to fall down like a rag-doll. Just after stepSimulation(), call processContactData() to check for collisions between the ball and the character:

void BulletBlendReaderNew::processContactData()
{
    // Assume world->stepSimulation or world->performDiscreteCollisionDetection has been called
    int numManifolds = m_destinationWorld->getDispatcher()->getNumManifolds();
    for (int i = 0; i < numManifolds; i++)
    {
        btPersistentManifold* contactManifold = m_destinationWorld->getDispatcher()->getManifoldByIndexInternal(i);
        btCollisionObject* obA = static_cast<btCollisionObject*>(contactManifold->getBody0());
        btCollisionObject* obB = static_cast<btCollisionObject*>(contactManifold->getBody1());

        int numContacts = contactManifold->getNumContacts();
        if (!numContacts)
            continue;

        // check if one of the colliding shapes is the ball
        if (obA != ball && obB != ball)
            continue;

        // simplified check whether the other collision shape is part of the character's armature
        if (obA->getCollisionShape()->getUserPointer() == &(this->armature)
                || obB->getCollisionShape()->getUserPointer() == &(this->armature))
        {
            // convert armature -> ragdoll when the ball hits the character
            turnToRagdoll(&(this->armature));
        }
    }
}


void BulletBlendReaderNew::turnToRagdoll(ArmNode *nodeList)
{
    ArmNode *next = nodeList;
    while (next) {
        // re-enable rotation: body parts start with angular factor 0
        // (see createBulletObject()), so gravity and contacts take over here
        if (next->piece)
            ((btRigidBody*)next->piece)->setAngularFactor(1.f);
        next = next->nextSibling;
    }
}



Note that all the body parts' meshes are marked as "Dynamic" in Blender's Logic panel (F4). The dynamic bodies initially have their angular factor set to 0.f in ReadBlend, in BulletBlendReaderNew::createBulletObject().

In the case of complex, sophisticated character meshes, it makes sense to use the same technique, and not only on constrained systems (like iPhone and N9): don't add the complete mesh to the physics world for collision and gravity simulation; instead, create a "shadow" model out of boxes just for physics and collision. Then, when rendering the rigged mesh, use the motionState to get the location (transformation) of each box (a bone of the simplified armature in the physics simulation world) and apply that transformation to the corresponding bone of the complex rigged character when rendering it. The simple physics model would not be rendered; it would just exist in the physics world.
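A sketch of that read-back step - ArmNode comes from the code above, while setBoneTransform is a hypothetical hook into the renderer:

// Read back each physics box's transform via its motion state and use it
// to pose the corresponding bone of the rendered (rigged) mesh.
void applyPhysicsToBones(ArmNode *nodeList)
{
    for (ArmNode *node = nodeList; node; node = node->nextSibling) {
        btRigidBody *body = (btRigidBody *)node->piece;
        if (!body || !body->getMotionState())
            continue;
        btTransform trans;
        body->getMotionState()->getWorldTransform(trans);
        float m[16];
        trans.getOpenGLMatrix(m);   // column-major, ready for the shader
        setBoneTransform(node, m);  // hypothetical: update the rendered bone
    }
}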

Monday, November 28, 2011

Tree collision demo - getting the texture and color to N9

I didn't use the textures from the demo http://opengles20study.blogspot.com/2011/01/cel-shading-toon-shading-silhouette-how.html after porting to the N9: http://opengles20study.blogspot.com/2011/08/meego.html
To get the textures to work when using the jpeg library (i.e. not using the iPhone SDK to generate images - code under #ifdef USE_IPHONE_SDK_JPEGLIB), it is necessary to change m_pixelColorComponents from GL_RGBA to GL_RGB here.
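In other words, the upload should look roughly like this for the 3-bytes-per-pixel libjpeg output (width/height/pixels assumed):

// With libjpeg output, both the internal format and the format argument
// must be GL_RGB, not GL_RGBA.
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   // rows are not 4-byte aligned
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
             GL_RGB, GL_UNSIGNED_BYTE, pixels);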
It seemed like a good idea not to texture the tree, but to supply the color via a uniform value per instance when rendering the tree. I plan to check how the tree looks with very few leaves, and for that the ngPlant leaves group would probably better use textures (billboards). For an example of how to use a uniform color value in a shader, just grep the code for u_Color - there are several available.
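For instance, something along these lines (program handle and color values are illustrative): in the fragment shader a "uniform lowp vec4 u_Color;" replaces the texture lookup, and on the C++ side the color is set before each draw call:

GLint colorLoc = glGetUniformLocation(program, "u_Color");
glUseProgram(program);
glUniform4f(colorLoc, 0.45f, 0.30f, 0.15f, 1.0f); // brown-ish branch color
// ... then glDrawElements(...) for this tree instance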
This is how it looks on N9 - sorry for the poor video quality.


Monday, November 7, 2011

Tree branches collision detection

I showed the example from the previous post to my cousin Miša and explained how I plan to add physics and collision detection for each branch and leaf of the tree; he noticed that the tree looks quite complex (several thousand vertices) and asked how much CPU that could consume, and how much it could affect performance and lower the frames per second count...
In short, that is what this post, its code examples and the demo are about. There is another thing that could be interesting: animating the camera position from behind the ball to a position orthogonal to the ball's direction while the ball is moving - without stopping or slowing down the simulation like in previous posts.

I have to apologize for the not-so-good quality of the videos I made; I'll try to compensate with detailed explanations and code examples on how to use the related ngPlant and Bullet physics APIs.

The video above shows a heavy ball rolling down the hill to hit the tree, bounce back and hit a low-hanging tree branch.
Another video shows a rather heavy ball rolling down the hill and hitting the tree; after it rolls away from the tree, btRigidBody::applyCentralImpulse() is "applied", throwing the ball back at the tree. After it bounces back several times and gets thrown again, the ball eventually gets stuck between the branches of the tree.

This is a simplified version of the code throwing the ball from the ground towards the tree - an impulse at an angle of 45 degrees (500 horizontally and 500 vertically):

Vec3 vToTree;
// vTarget is the tree, vFrom is the current ball position
MatrixVec3Normalize(vToTree, vTarget - vFrom);
((btRigidBody*)ball)->applyCentralImpulse(btVector3(0., 0., 500.)
        + 500. * btVector3(vToTree.x, vToTree.y, vToTree.z));


Camera animation while the simulation is running - from behind the ball to a view orthogonal to the ball's rolling direction

For this, I'm using the TransformInterpolator class designed in a previous post about camera movement animation - the start point of the transform interpolation is defined as the camera behind the ball:


MATRIX mTransformStart;
// take vFrom as previous position and vTarget as current, or in this case it is tree position
modelTransformWithCameraBehindTheBall(mTransformStart, vFrom, vTarget);
cameraAnimation = new TransformInterpolator(mTransformStart, mTransform, 75);


The end point of the camera animation (mTransform) is the camera transformation looking at the ball orthogonally to the direction of ball movement, calculated as in this example:

Vec3 vBallDirection = vTarget - vBall;
Vec3 vViewDirection, vDirection;
// camera position lies on vector vViewDirection from the ball:
// it needs to be orthogonal to both the ball movement direction (vBallDirection)
// and the "up" vector z - in the example coordinate system, z means up.
MatrixVec3CrossProduct(vViewDirection, vBallDirection, Vec3(0, 0, 1));
MatrixVec3Normalize(vDirection, vViewDirection);

// center of the view is between the ball and the tree
Vec3 vTo = vBall + vBallDirection / 2;

// we take some distance from the "center of the view" - a point that lies on
// vector vViewDirection; looking at vTo then gives a nice landscape view,
// orthogonal to "up" and the ball direction.
MatrixLookAtRH(mTransform, vTo + 3.f * CameraDistanceFromBall * vDirection, vTo, Vec3(0, 0, 1));


It is important to update the cameraAnimation end point after each dynamicsWorld->stepSimulation(); as the ball advances towards the tree, the camera needs to follow - after the animation is finished, the camera will just follow the ball movement. Since the target is the tree, this gives a nice side view of the ball hitting and bouncing off the tree, as in both videos above.
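Per frame it looks roughly like this - note that updateEndPoint() is a hypothetical name; the actual code may refresh the end matrix differently:

dynamicsWorld->stepSimulation(1.f / 60.f);
// ... recompute mTransform with MatrixLookAtRH as in the snippet above ...
if (cameraAnimation) {
    cameraAnimation->updateEndPoint(mTransform); // hypothetical method
    if (!cameraAnimation->finished())
        cameraAnimation->step();
}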

Physics (collision) for tree and all tree branches

ngPlant and the Bullet physics btIndexedMesh and btTriangleMesh APIs fit together nicely; I'm using:

void P3DHLIPlantInstance::FillVAttrBuffersI( const P3DHLIVAttrBuffers *VAttrBuffers, unsigned int GroupIndex) const

to load the vertices, used both to render the tree and, for physics, as btIndexedMesh::m_vertexBase, and:


void P3DHLIPlantTemplate::FillIndexBuffer
(void *IndexBuffer,
unsigned int GroupIndex,
unsigned int PrimitiveType,
unsigned int ElementType,
unsigned int IndexBase) const

to load the triangle vertex indices, used both to render the tree and, for physics, as btIndexedMesh::m_triangleIndexBase.

This is the code used to add the mesh for a tree to the btDynamicsWorld:


btVector3 scale(object->size.x, object->size.y, object->size.z);
btTransform worldTrans;
worldTrans.setIdentity();
worldTrans.setOrigin(btVector3(object->loc.x, object->loc.y, object->loc.z));
worldTrans.getBasis().setEulerZYX(object->rot.x, object->rot.y, object->rot.z);

btTriangleMesh* meshInterface = new btTriangleMesh();
// for now, modeling trees in Blender this way - a mesh whose name starts with OBplant_
if (!strncmp(object->id.name, "OBplant_", 8)) {

    // get the plant data
    PlantGraphicsObject *plant = (PlantGraphicsObject *)createGraphicsObject(object, 0);

    // fill the Bullet mesh interface with plant branch level data
    for (int i = 0; i < plant->plantData().branchLevels.size(); i++) {

        btIndexedMesh mesh;
        const PlantObjectData::BranchLevel &levelData(plant->plantData().branchLevels.at(i));
        mesh.m_numTriangles = levelData.IndexCount / 3;
        mesh.m_triangleIndexBase = (const unsigned char *)levelData.IndexBuffer;
        mesh.m_triangleIndexStride = 3 * sizeof(unsigned int);
        mesh.m_numVertices = levelData.vertexCount;
        mesh.m_vertexBase = (const unsigned char *)levelData.PosBuffer;
        mesh.m_vertexStride = sizeof(float) * 3;
        meshInterface->addIndexedMesh(mesh);
    }

    // create a concave collision object based on the Bullet mesh interface
    btCollisionShape* colShape = 0;
    btBvhTriangleMeshShape* childShape = new btBvhTriangleMeshShape(meshInterface, true);

    if (scale[0] != 1. || scale[1] != 1. || scale[2] != 1.) {
        // I have scaled down the mesh in Blender as it was too big
        colShape = new btScaledBvhTriangleMeshShape(childShape, scale);
    } else {
        colShape = childShape;
    }

    btVector3 inertia(0, 0, 0);
    btRigidBody* colObj = new btRigidBody(0.f, 0, colShape, inertia);
    colObj->setWorldTransform(worldTrans);

    // references ...
    colObj->setCollisionShape(colShape);
    plant->setCollisionObject(colObj);
    colObj->setUserPointer(plant);

    // add the physics object to the world
    m_destinationWorld->addRigidBody(colObj);
}

Sunday, October 16, 2011

N9

I'm very proud to have been part of the N9 team for the last three years. The device is out. It is a beautiful piece of hardware with an amazing screen.

I'll start using it as a development device for the things I'm learning here. Here is the demo from the last post, with 2 ngPlant trees, running on the N9.


The next thing I'm planning to do is add collision for all of the branches of the ngPlant tree to PlayBlends' btDiscreteDynamicsWorld - say, simulating a ball hitting a branch, then bouncing from one to another until it falls down to the ground... or a micro airplane flying through the treetop... anyway, I just would like to see how it performs and what the constraints are. Rendering the ngPlant tree seems quite OK - uploading vertex and index data to GPU buffers for rendering:

        glBindBuffer(GL_ARRAY_BUFFER,...);
        glBufferData(GL_ARRAY_BUFFER, ... GL_STATIC_DRAW);

....
        Piper::instance()->glDrawElements(GL_TRIANGLES....
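Spelled out, that upload-once/draw-many pattern looks roughly like this (buffer handles, counts and the attribute location are mine):

// One-time setup: upload the static tree geometry to GPU buffers.
GLuint vbo, ibo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, vertexCount * 3 * sizeof(GLfloat),
             positions, GL_STATIC_DRAW);

glGenBuffers(1, &ibo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, indexCount * sizeof(GLuint),
             indices, GL_STATIC_DRAW);

// Every frame: bind and draw, no further data transfer.
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glVertexAttribPointer(positionAttrib, 3, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(positionAttrib);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, 0);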

Note that ngPlant doesn't yet support exporting mesh data as triangle strips. The documentation states it is planned for future versions.
For tree branch collision detection, I'll start by reading Real-Time Collision Detection and checking btBvhTriangleMeshShape.

Sunday, September 25, 2011

Rendering plants, trees, forest. ngPlant.

ngPlant (http://ngplant.sourceforge.net/) is open source plant modeling software, and I wanted to check how to use it and how it performs. I downloaded the ngPlant source from http://sourceforge.net/projects/ngplant/files/ngplant/0.9.8/ngplant-0.9.8.tar.gz/download.

ngPlant license:
"
ngPlant is a Free Software project. ngPlant modeling tool and ngpshot utility are distributed under the terms of the GNU General Public License (GPL). Software libraries (libngpcore, libngput and pywrapper) which may be used in another projects are distributed under the terms of the BSD License.
"
I checked the code; to generate models (ngPlant documentation uses the term "instantiate") and render a group of (different) trees of the same species, I would need libngpcore and maybe libngput. "Maybe libngput" because some of its code seems to already exist in the oolongengine code.
The BSD license (libngpcore and libngput) means the code is OK to statically link and use in iPhone apps. The GPL license for ngpview and ngpshot prohibits reusing their rendering implementation in iPhone or N9 (non-GPL) applications. Anyway, I checked the code, and since it is fixed-pipeline desktop OpenGL code using GLX and GLEW, I could not use it as-is anyway.
In short, I could reuse the loading from .ngp files and the instantiation of tree mesh models, but needed to write the rendering myself.

Compiling went OK with no big problems - there is a dependency on GLX in p3dglmemcntx.cpp (and .h). I removed it, and the PNG backend, from compilation. Next was to write OpenGL ES 2.0 rendering for ngPlant and include its physics. First, I took the example elm.ngp available in Yorik's plant making tutorial. There is also a plant library available here. Thanks, Yorik.

Anyway,... elm.ngp. With tens of thousands of triangles, the tree model was a bit heavy to render - so I decided to simplify it: I removed a few branches and did not even want to display leaves in this scene (ngPlant uses the term billboards). The elm tree was too big compared to the rest of the scene, so I scaled it down when rendering. I've positioned the tree so that the ball would, after hitting the boxes, bounce off it. Here is the result:


Saturday, August 6, 2011

Porting to Linux (MeeGo)

The video corresponds to the first version of the Linux (EGL, XLib, OpenGL ES 2.0) port. An N900 is used for the demo, and I plan to provide a comparable video on the N9 later, after sales start.

I pushed the source code to the branch portToMeego. It uses EGL and XLib to create the window and initialize the GL surface. I used Qt project files and ran the code from Qt Creator. Qt SDK 1.1.2 includes support for the N9 (MeeGo Harmattan), MeeGo and the N900.

There is a nice example on wiki.maemo.org demonstrating how to use EGL, XLib and OpenGL ES 2.0. To run it on MeeGo devices, you'll just need to remove the N900-specific XChangeProperty call with "_HILDON_NON_COMPOSITED_WINDOW".
Compared to iOS's EAGLContext presentRenderbuffer (the example behind the link also includes initialization), with EGL you swap the render buffer using eglSwapBuffers, like here.
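A minimal main-loop sketch (the display and surface handles come from the initialization shown in the example; the two helper calls are placeholders):

// Render loop: draw with GL ES 2.0, then present via EGL.
for (;;) {
    processXEvents();                        // placeholder: XLib event handling
    renderFrame();                           // placeholder: GL ES 2.0 drawing
    eglSwapBuffers(eglDisplay, eglSurface);  // EGL's presentRenderbuffer equivalent
}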

Sunday, January 30, 2011

Cel shading, toon shading, silhouette extraction,... how to make the scene look better

So far, it was about porting ReadBlend (a Bullet physics world populated from a .blend file) from OpenGL ES 1.1 (fixed pipeline) to OpenGL ES 2.0 (GLSL), using an example toon shader (blue-white-black color palette), and making the camera slerp, lerp and follow the ball a few metres behind it while the ball is accelerating down the hill.
I'd like to learn how to make it look better. 

The first approach would be to use a texture in the toon shader.

The left side shows the original, with white Phong reflection. On the right hand side is,... I think the silhouette looks nice, but the texture is wrong for a cartoon look. Phong doesn't look like Phong anymore. A patch implementing the described change to Toon.frag is here. Might help... a few lines of code copied from Texture.frag.

I've simplified the ReadBlend scene textures to be more cartoon-like, and modified Toon.frag to make the silhouette have the same color as the area inside it, just in a darker tone. IMHO it looks better than the blue-white-black scene from previous posts:




Without geometry shaders or expensive analysis of faces on the CPU side, I still need to find a proper technique. There are other things that I need to figure out first, so... coming back to this later.
Geometry shaders are not supported on iOS (or other OpenGL ES 2.0 implementations). I understand that the hardware supports them (PowerVR SGX and SGXMP), but the feature is available only through the DirectX Shader Model. Anyway, for other techniques on OpenGL ES, or for those who don't plan to use OpenGL ES (and so can use geometry shaders) and are visiting this page, I recommend reading The Little Grasshopper post about silhouette extraction.

Update on 27.02.2012 - I found a nice example in the PowerVR documentation (a document titled Edge Detection Training Course), based on edge-detection postprocessing: render the scene in a first pass, then detect and draw edges on the GPU in a second pass. I plan to give it a try soon. Note that the edge detection is not a slowish color comparison with all surrounding texels; instead it packs an objectId into the alpha channel, and the postprocessing in the second pass compares only with 2 surrounding texels. On an iPhone 3GS this runs at 60 fps.
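Based on that description, the second-pass fragment shader would look something like this - my sketch (as a C++ source string), not PowerVR's code:

// Second pass: compare the object id (alpha) of this texel with only two
// neighbours; a difference means an object boundary, so draw the edge color.
static const char *edgeFrag =
    "precision mediump float;\n"
    "uniform sampler2D u_Scene;\n"
    "uniform vec2 u_TexelSize; // 1/width, 1/height\n"
    "varying vec2 v_TexCoord;\n"
    "void main() {\n"
    "    vec4 c = texture2D(u_Scene, v_TexCoord);\n"
    "    float idRight = texture2D(u_Scene, v_TexCoord + vec2(u_TexelSize.x, 0.0)).a;\n"
    "    float idUp    = texture2D(u_Scene, v_TexCoord + vec2(0.0, u_TexelSize.y)).a;\n"
    "    // ids differ by at least 1/255, so half a step is a safe threshold\n"
    "    bool edge = abs(c.a - idRight) > 0.002 || abs(c.a - idUp) > 0.002;\n"
    "    gl_FragColor = edge ? vec4(0.0, 0.0, 0.0, 1.0) : c;\n"
    "}\n";

Scaling u_TexelSize up by a factor would be one way to get the 1 -> 2 -> 3 pixel edge widths shown in the video below.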


I applied this (using the alpha channel to hold info about the object) and got nice results (the video presents the edge size changing from 1 to 2 to 3 pixels wide):




I plan to experiment with using normals instead of object ids, to get internal edges too, not only silhouettes, and will post the full code then.



Saturday, January 22, 2011

Camera movement animation, slerp and lerp interpolation, moving camera through 3D world

Code for the example is in this commit.

Here, I wanted to make the camera in the example from the first post (where the camera is static, always displaying the scene from the same position) move; initially to rotate around the scene from the initial position to a position right behind the ball. Once it is positioned behind the ball, the physics step simulation starts - the ball starts rolling towards the target and the camera follows it down the hill.

Check the video to see how it looks - I'll prepare the video and then explain the details.


For this example, the key points (in 3D space) that define the camera trajectory and orientation are:
1) the initial camera position as defined in the .blend file, looking at the scene from the left side
2) just behind the ball, looking in the direction of the target
3) the target (I just picked one of the cubes in the scene - MECube.013).

Camera position animation from 1 to 2 is done with
if (cameraAnimation)
    cameraAnimation->step();
where cameraAnimation is an instance of TransformInterpolator, interpolating - animating - values from one matrix to another using slerp for rotation and lerp for translation. TransformInterpolator combines MATRIX lerp and slerp, and provides convenience step() and finished() methods to model the animation interpolation.
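A simplified sketch of the idea - the real class is in the commit, and the math function names here only approximate oolongengine's API:

// Slerp the rotation via quaternions, lerp the translation, over N steps.
class TransformInterpolator {
public:
    TransformInterpolator(const MATRIX &from, const MATRIX &to, int steps)
        : m_from(from), m_to(to), m_result(from), m_steps(steps), m_current(0) {}

    bool finished() const { return m_current >= m_steps; }

    void step()
    {
        if (finished())
            return;
        float t = (float)(++m_current) / m_steps;
        QUATERNION qFrom, qTo, q;
        MatrixToQuaternion(qFrom, m_from);       // conversion added in the commit
        MatrixToQuaternion(qTo, m_to);
        MatrixQuaternionSlerp(q, qFrom, qTo, t); // rotation: slerp
        MatrixRotationQuaternion(m_result, q);
        for (int i = 12; i < 15; i++)            // translation: lerp the 4th column
            m_result.f[i] = m_from.f[i] + t * (m_to.f[i] - m_from.f[i]);
    }

    const MATRIX &transform() const { return m_result; }

private:
    MATRIX m_from, m_to, m_result;
    int m_steps, m_current;
};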

To get the slerp working: there are 2 slerp implementations in oolongengine, and I'm using the one from Math/Matrix.cpp. I could have also picked the Bullet physics one, but I did not want to convert MATRIX -> btTransform -> btQuaternion -> do the slerp (then back from quaternion to MATRIX) -> btTransform -> float[]. The reason I'm planning to use the Math/Matrix.cpp code is its implementation of NEON ARMv7 matrix multiply. It is quite possible that I'm wrong and there is no difference in performance between MATRIX's and btTransform's matrix multiplication...

The first problem I faced - and it took me some time to debug, refreshing meanwhile my knowledge of matrix scaling, velocity and determinants - was that converting a MATRIX from MatrixLookAtLH to a quaternion and back to a matrix resulted in a scaled matrix. The fix was to normalize vectors after multiplying in MatrixLookAt - not before multiplication. After this patch, it started to work. I also added matrix-to-quaternion conversion here.

Following the ball
Once the camera reaches position 2, cameraAnimation->finished() returns true and we start stepping the btDiscreteDynamicsWorld; the ball goes down the hill and the camera follows it, looking toward the target.
I figured out that if I'd like to slow down the btDiscreteDynamicsWorld simulation and make the camera fly around the world in slow motion, I just need to supply a value lower than one for sceneSlowdownWhileAnimatingCamera, e.g. 0.5, in dynamicsWorld->stepSimulation(sceneSlowdownWhileAnimatingCamera * 1./60.).
Anyway, let's first see how to get that toon shader and the scene to look better.