Sunday, May 13, 2012

Auto rigging, Motion capture import to Blender, Maya->Blender (WAS: Blending morphing with skeleton (on GPU) animations)

After some time, I'm continuing with fixing a few remaining issues in GPU skinning for AnimKit. I'd like to prepare another example, with facial animation and normal map skinning.

AnimKit's AppAnimKitGL blends animations on the CPU side - the code here does it on iPhone 3GS with the morphing animation (head) applied first on the CPU, the buffers then updated via glBufferData(..., GL_STREAM_DRAW), and finally the skeleton animation applied in the vertex shader.

AnimKit's AppAnimKitGL animation blending example

The character has ~2700 vertices and 18 bones, and morphing operates on almost all of the vertices. Though only the face vertices actually change, it is a good example to start from. The original "all on CPU" version runs at ~15 frames per second, while "morph on CPU, apply bones on GPU" runs at ~60 fps. For the facial animation I plan to restrict morphing to the face submesh only, then apply bones to all...

After some time, this became an exercise with different tools more than handling the animation in C++ code. I saw several questions about a feasible way to use artwork from Maya and 3DS Max in Blender, so I'm going to explain the approach I took. I tried other formats (Collada, OBJ), but the following gave the best results:

Converting .mb to .blend and applying textures

First, I started looking for a free 3D model I could use for the example and took this one from It is only available as a Maya binary (file name ingame.mb) and had no rig. UV textures are available, but not visible in Maya.
I installed the FBX exporter plugin for Maya, opened the file and exported it to FBX. Then I imported the .fbx file into 3DS Max and exported it to a VRML (.wrl) file. This one, when imported into Blender (using 2.57), showed perfectly: 5 meshes (body, bag, head and 2 eyes) that just needed to be scaled.
For each of the meshes (body, bag, head and 2 eyes), I opened Edit mode and used UV unwrap to get the UV layout. I replaced the original textures with the .jpeg files available with the model. Each mesh's UV map was offset and of a different size compared to the texture, so I used the Blender UV editor to map the unwrapped mesh vertices onto the texture. Finally, after doing this for all the meshes, the model showed properly in Blender:

Mixamo auto rigging and using Motion Capture .bvh file

Google showed me a lot of Mixamo advertisements recently, and I decided to try it. I exported the model as FBX and uploaded it. Notice that the model doesn't have a standard T-pose - the hands are rotated and close to the body. The auto rigger asked me to set positions for the chin, wrists, elbows, knees and ... In a minute I could see a perfectly rigged model in the Mixamo web viewer (Unity plugin), with all finger bones (~60 bones in total), in different positions. I exported the motion capture file (.bvh) from Mixamo and imported it to the Blender model. I could see the animated skeleton, the same size as the original model.
There is a bit of work to apply the skeleton to the mesh, but do it several times and it starts to be fast (about a minute for each motion capture):
Select the skeleton, enable X-ray so that it is visible through the mesh, and go to Edit mode. The skeleton gets displayed in the same pose as the mesh and just needs to be translated and rotated to fit it. Once you do that, leave skeleton Edit mode (the skeleton may then appear somewhere else, but no need to worry). Select all soldier meshes (body, bag, head and 2 eyes) and create a group. This is important, as the skeleton will then uniformly deform the meshes that are part of the same group - otherwise you would see the bag and eyes moving away from the body.
Having all group members selected in Object mode, also select the skeleton and use Object/Parent/Set, then choose armature with automatic weights. You might see that for parts of the original meshes the rig is not properly applied - in that case, editing bone envelopes or extruding new bones to cover the area should solve the problem.

In AnimKit on iPhone

Finally, I saved the file, added it to the AppAnimKitGL example and modified the code here to use it. With ~23000 vertices in the mesh and ~60 bones, it renders at ~45 frames per second on iPhone 3GS.

Before continuing with morphing and blending, I will need to check the feasibility of using a .blend file for in-game artwork - it is fast for prototyping, but startup time and memory footprint are not that good; it should be much better with PowerVR tooling - for textures only, or also for meshes, rigs and animations. Let's see. I found a few warnings on forums about problems with animations when using PowerVR exporters from Blender. Additionally, I spent some time learning 3DS Max and character modeling and enjoyed some of the features.

Sunday, March 4, 2012

Matrix (palette) skinning for OpenGL ES 2.0 (GLSL, GPU, hardware) - applying bone transformation to mesh in vertex shader

When starting this post, the intention was to explain how I implemented GPU skinning in the previous post - sort of a tutorial on how to port CPU-skinning based code (often the approach in desktop code) to OpenGL ES.

1) Use buffers for vertices and normals (m_posnoVertexVboIds) and vertex indices (m_staticIndexVboIds).
Here, a buffer is also used for vertex colors (m_staticVertexVboIds). Do this once when the GL context is initialized - e.g. in the UIViewController awakeFromNib implementation, called when the view is loaded from the nib.

glBindBuffer(GL_ARRAY_BUFFER, m_posnoVertexVboIds[i]);
glBufferData(GL_ARRAY_BUFFER, nv*posnodatas, posnodata, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, m_staticVertexVboIds[i]);
glBufferData(GL_ARRAY_BUFFER, nv*staticdatas, staticdata, GL_STATIC_DRAW);

glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, m_staticIndexVboIds[i]);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, ni*(idatas/2), indexData2UShort, GL_STATIC_DRAW);

No need to read this paragraph further if you are familiar with stride and glBufferData. Note that the n-th value in indexData2UShort defines the index into posnodata and staticdata for the n-th vertex. E.g., when later "drawing" the 56th vertex, indexData2UShort[55] == 13 means that the position and normal for the 56th vertex are defined in posnodata[13] and its color in staticdata[13]. This is a quite simplified explanation; 55 is used for the 56th element since counting starts from 0.

2) Stuff bone index and bone weight data to buffers. 
Use the same approach as in 1) for the bone weight and bone index data. This means that you need to prepare data blobs packed like:

boneIndexData:       <vertex1_bone1_index><vertex1_bone2_index><vertex1_bone3_index><vertex1_bone4_index><vertex2_bone1_index><vertex2_bone2_index> ….  

boneWeightsData:  <vertex1_bone1_weight><vertex1_bone2_weight><vertex1_bone3_weight><vertex1_bone4_weight><vertex2_bone1_weight><vertex2_bone2_weight> 

if (isMeshDeformedBySkeleton()) {
    void *boneIndexData = sub->getBoneIndexDataPtr();
    void *boneWeightsData = sub->getBoneWeightsDataPtr();
    glBindBuffer(GL_ARRAY_BUFFER_ARB, m_boneIndexVboIds[i]);
    glBufferData(GL_ARRAY_BUFFER_ARB, nv*4*sizeof(UTuint8), boneIndexData, GL_STATIC_DRAW);
    glBindBuffer(GL_ARRAY_BUFFER_ARB, m_boneWeightVboIds[i]);
    glBufferData(GL_ARRAY_BUFFER_ARB, nv*4*sizeof(float), boneWeightsData, GL_STATIC_DRAW);
}

In the example above, <vertexA_boneB_index> is 1 byte (unsigned char) and <vertexA_boneB_weight> is a 4-byte float, though 1 or 2 bytes should suffice for the weight too.
<vertexA_boneB_index> defines which bone affects the position of vertex A, and <vertexA_boneB_weight> defines how much it affects it. Note that every vertex position can be affected by up to 4 bones. If <vertexA_boneB_weight> == 0, the bone with index <vertexA_boneB_index> doesn't affect vertex A.
As explained in 1), an index in m_staticIndexVboIds also maps to the index of the data related to that vertex in the boneIndexData and boneWeightsData arrays. E.g., the bytes boneIndexData[13*4], boneIndexData[13*4+1], boneIndexData[13*4+2] and boneIndexData[13*4+3] define the indices of bones 1-4 affecting the position of the 56th vertex (from the example in step 1), while the 4-byte floats at byte positions boneWeightsData[13*4*4 … 13*4*4+3], …, boneWeightsData[13*4*4+3*4 … 13*4*4+3*4+3] define the "weights" of the corresponding bones.
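The byte-offset arithmetic above can be checked with a tiny sketch (the helper names are mine, not AnimKit's; it assumes 4 bones per vertex, 1-byte indices, 4-byte float weights as in the post):

```cpp
#include <cassert>
#include <cstddef>

// Byte offset of bone slot `bone` (0..3) for the vertex whose buffer
// index is v (13 for the 56th drawn vertex in the example above).
size_t boneIndexOffset(size_t v, size_t bone)  { return v * 4 + bone; }

// Weights are 4-byte floats, so the same slot scaled by sizeof(float).
size_t boneWeightOffset(size_t v, size_t bone) { return (v * 4 + bone) * sizeof(float); }
```

For vertex 13 this gives bytes 52..55 in boneIndexData and bytes 208..223 in boneWeightsData, matching the arithmetic above.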

Vertex shader "receives" this data in:

attribute mediump vec4 a_BoneIndices;
attribute mediump vec4 a_BoneWeights;

Like the data in step 1), the data in step 2) needs to be uploaded only in init().

3) Update the matrix palette as the simulation progresses - prepare the matrix palette by stepping the simulation.
    // fill bone transformations into the m_matrixPalette structure
4) When rendering each frame, upload the updated matrix palette to the shader:

    glUniformMatrix4fv(PiperGL20::instance()->currentShader()->BoneMatricesHandle, m_matrixPalette.size(), GL_FALSE, (float*)matrixData);

    BoneMatricesHandle = glGetUniformLocation(uiProgramObject, "u_BoneMatrices[0]");

and u_BoneMatrices is defined in the vertex shader (sized for 8 bones in this example) as:

uniform highp   mat4    u_BoneMatrices[8];

5) Draw (glDrawElements) - use the vertex buffers prepared in step 2) when drawing.

glBindBuffer(GL_ARRAY_BUFFER, m_boneWeightVboIds[j]);
glVertexAttribPointer(GL_BONEWEIGHT_ARRAY, 4, GL_FLOAT, GL_FALSE, weightsbuf->stride, (GLvoid*) weightsbuf->getOffset());
glBindBuffer(GL_ARRAY_BUFFER, m_boneIndexVboIds[j]);
glVertexAttribPointer(GL_BONEINDEX_ARRAY, 4, GL_UNSIGNED_BYTE, GL_FALSE, indicesbuf->stride, (GLvoid*) indicesbuf->getOffset());


if (!useGPUSkinning && PiperGL20::instance()) {
    // since we use the same shader, turning off matrix skinning this way
    glUniform1i(PiperGL20::instance()->currentShader()->BoneCountHandle, 0);
}
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER_ARB, m_staticIndexVboIds[j]);
Piper::instance()->glDrawElements(GL_TRIANGLES, tot, GL_UNSIGNED_SHORT, (GLvoid*)idxbuf->getOffset());

I believe there is a constraint on the number of bones in the matrix palette (the palette is limited by the number of available vertex shader uniform vectors, GL_MAX_VERTEX_UNIFORM_VECTORS), but I don't know what the number is for iPhone 3GS. If you need more bones than the platform supports, you could try to split the mesh into submeshes (and subskeletons) and render the submeshes separately.
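A rough, assumption-laden estimate of the palette size: each mat4 consumes 4 vec4 uniform slots, and some slots are needed for other uniforms (MVP matrix, lighting, ...). The numbers below are examples only - query GL_MAX_VERTEX_UNIFORM_VECTORS at runtime for the real limit:

```cpp
#include <cassert>

// Estimate how many bone matrices fit in the vertex shader's uniform
// storage. maxVertexUniformVectors comes from glGetIntegerv(
// GL_MAX_VERTEX_UNIFORM_VECTORS, ...); reservedVectors is whatever
// your other uniforms (MVP, lighting, ...) consume.
int maxPaletteBones(int maxVertexUniformVectors, int reservedVectors) {
    // each mat4 uniform takes 4 vec4 slots
    return (maxVertexUniformVectors - reservedVectors) / 4;
}
```

E.g. with a (hypothetical) 128 available vectors and 16 reserved, roughly 28 bones fit, which is why splitting into submeshes becomes necessary for larger rigs.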

6) The vertex shader does the rest - for every vertex, apply the bone transformation matrices.


    if (u_BoneCount > 0) {
        highp mat4 mat = u_BoneMatrices[int(indexOfBone.x)] * weightOfBone.x;
        for (int i = 1; i < u_BoneCount; i++) {
            // rotate so that indexOfBone.x refers to the next bone in the loop
            indexOfBone = indexOfBone.yzwx;
            weightOfBone = weightOfBone.yzwx;
            if (weightOfBone.x > 0.0) {
                mat = mat + (u_BoneMatrices[int(indexOfBone.x)] * weightOfBone.x);
            }
        }
        // resulting position and normal after applying skinning
        vertex = mat * a_Vertex;
        normal = mat * a_Normal;
    }
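For reference, the weighted-matrix blend the shader performs can be written as a small self-contained C++ function, handy for sanity-checking palette data on the CPU. This is my sketch (column-major matrices, as in GLSL), not AnimKit code:

```cpp
#include <array>

using Mat4 = std::array<float, 16>;   // column-major, like GLSL
using Vec4 = std::array<float, 4>;

// Blend up to 4 palette matrices by weight, then transform v -
// the same math the vertex shader performs per vertex.
Vec4 skin(const Mat4* palette, const unsigned char idx[4],
          const float w[4], const Vec4& v) {
    Mat4 m{};                                    // zero-initialized accumulator
    for (int b = 0; b < 4; ++b)
        if (w[b] > 0.0f)
            for (int k = 0; k < 16; ++k)
                m[k] += palette[idx[b]][k] * w[b];
    Vec4 out{};
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
            out[r] += m[c * 4 + r] * v[c];       // column-major multiply
    return out;
}
```

With an identity palette and weights summing to 1, the output equals the input position, which is a quick correctness check for exported weights.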

Complete source code is available here.

Saturday, February 18, 2012

Performance: GPU vs CPU matrix palette skinning (WAS: Tutorial #2: Part 1: Animated character walking around. Walk cycle and animated clouds)

I planned, for a while, to do another GLES 2.0 demo for AnimKit with multiple characters on the scene. I found some time yesterday and this is the start (video below). I'll probably cover this in multiple posts, posting the results here as I find time to progress with it.

Note that the scene is just a work in progress; the clouds behind are big planes causing some performance impact, but I think it is a good illustration of what can be done in a few hours, and I'll use it as input for later performance optimization work on "AnimKit on OpenGL ES 2.0". I'll post code and more text soon - my one-year-old is about to wake up.

The first work in progress version of code is here: commit 3e7cfe2b27 WIP: Tutorial #2: Part 1: Animated character walking around.

The code could be used as illustration and for benchmarking. I guess if you have a simple scene with fewer than 4000 matrix-skinned vertices it should also be usable, but for this example it is not OK - I downloaded the SDK for iOS 5.0 and tried it on iPhone 3GS. Got only 7 frames per second (~8000 skinned vertices).
I tried a more complex model with 5 animated instances and got 1-2 frames per second (40000 skinned vertices). The model from the post Skeleton and pose character animation using AnimKit runs at 14 frames per second.

As assumed in previous posts, CPU matrix skinning (akGeometryDeformer::LBSkinning()), which enumerates through all vertices and normals, is the bottleneck - most of the time is spent there.

On the other side, I tried the PowerVR example of GPU matrix skinning, the POWERVR Insider MBX Chameleon Man Demo, and it runs quite smoothly... but it has only ~1000 skinned vertices in the mesh.

I guess poly-reducing the mesh and moving the calculation to another thread would help, though not significantly (based on the results above), so I plan to do that (reduce the complexity of the scene) but also check the Chameleon Man source code. The code is available as part of the PowerVR Insider SDK; you just need to register to get it.

Update on March 2nd, after implementing GPU skinning: I didn't spend time on reducing the scene - just after implementing GPU GLSL matrix skinning, the results already look promising (on iPhone 3GS):

Scene                                  Skinned triangles   FPS (CPU skinning)   FPS (GPU skinning)
gles2farm - 1 animated character       5620                8                    59
AppAnimKitGL - 5 animated characters   33700               1                    22

Code is here.

Five animated characters scene looks like this:

In the following post, I will try to explain how to implement matrix palette skinning (character skeleton animation on the GPU).

CPU skinning is not the best choice for OpenGL ES 2.0 devices.

CPU skinning is implemented like this: on every repaint, enumerate through all vertices, then calculate and apply the bone transformation. In more detail, the code below is an example of CPU skinning in AnimKit's akGeometryDeformer::LBSkinningUniformScale. Apparently, on iPhone these matrix operations affect performance significantly and are better suited for the vertex shader.

const btAlignedObjectArray<akMatrix4>& matrices = *mpalette;
for (unsigned int i=0; i<vtxCount; i++)
{
    akMatrix4 mat(matrices[indices[0]] * weights[0]);
    if (weights[1]) mat += matrices[indices[1]] * weights[1];
    if (weights[2]) mat += matrices[indices[2]] * weights[2];
    if (weights[3]) mat += matrices[indices[3]] * weights[3];
    // position
    *vtxDst = (mat * akVector4(*vtxSrc, 1.f)).getXYZ();
    // normal
    *normDst = (mat * akVector4(*normSrc, 0.0f)).getXYZ();
    *normDst = normalize(*normDst);
    akAdvancePointer(normSrc, normSrcStride);
    akAdvancePointer(normDst, normDstStride);
    akAdvancePointer(weights, weightsStride);
    akAdvancePointer(indices, indicesStride);
    akAdvancePointer(vtxSrc, vtxSrcStride);
    akAdvancePointer(vtxDst, vtxDstStride);
}

Anyway, after the bone transformations are applied and the mesh vertices and normals updated, repaint is called after updating the vertex buffer:

akSubMesh* sub = m_mesh->getSubMesh(i);
UTuint32 nv = sub->getVertexCount();
void *codata = sub->getSecondPosNoDataPtr();
UTuint32 datas = sub->getPosNoDataStride();
glBindBuffer(GL_ARRAY_BUFFER_ARB, m_posnoVertexVboIds[i]);
glBufferSubData(GL_ARRAY_BUFFER_ARB, 0, nv*datas, codata);

This part would also get fixed by GPU skinning, since the vertices only need to be "uploaded" to the vertex buffer once in init(), instead of on every redraw:

glBufferData(GL_ARRAY_BUFFER_ARB, nv*posnodatas, posnodata, GL_STATIC_DRAW);

Saturday, January 21, 2012

Tutorial #1: Creating animated characters with Blender and rendering them with AnimKit on iOS

So far, I have been writing posts describing how I figured, ported or implemented something. I got a few questions about how to use the code, so I plan to write short tutorials from time to time.

First post is about my findings while creating an animated mesh and getting it rendered on iPhone using OpenGL ES 2.0 and AppAnimKit. Note that the code should also work on Nokia N9 (Harmattan MeeGo), Ubuntu and Mac OS X.

Here is the result of the steps described in the tutorial:

Here is what I did:

1) Downloaded the mesh.
I downloaded this one from Note that there are many cute models available there, and also note that they are not for commercial use. I would like to credit the author, but the only thing I could find is that the model was uploaded 3 years ago. If someone knows more about the author, or if it is possible to get the author's name from a .3ds file, … I'd appreciate the help.
The columbine.3ds model contains a mesh, has no armature and is not animated. With ~100k vertices it was rather too complex to render on N9 and iPhone. I imported it into Blender from the .3ds file.
The model I'm using relies on vertex paint for rendering. Rendering using vertex groups is not yet implemented in AnimKit. UV texture works - if your model gets displayed white, you should pull the latest version - I plan to push the fix later this week.

2) polyreduce (and triangulate)
I find that the Blender 2.49b polyreducer script works better than applying the Decimate modifier - which is the approach if using Blender 2.57. So, open the mesh in Blender 2.49b, select it, go to Edit mode, select all vertices (or only the area you want to polyreduce), press the right mouse button to open the menu, and from the Scripts submenu select Polyreducer. If using per-vertex coloring (like I do in this case), disable "Interpolate vertex colors", as the result would look fuzzy. I applied the script several times, and the video above presents the mesh with ~3000 vertices.

3) Smooth normals
While in Blender 2.49b, select Smooth normals. I'm typing this while commuting and don't remember off the top of my head in which button area the toggle is. I could not find this option in Blender 2.57 and, if it is not set, the AnimKit loader will spend a noticeable amount of time on startup computing (smoothing) normals.

4) Create armature
Adding an armature to the mesh starts by placing the cursor at a point inside or under the mesh where you want your topmost bone to appear. For this I returned to Blender 2.57. I use the term topmost to mean "the parent of all other bones". Press Space and select Armature, then extrude bones - there are multiple tutorials available about it. To extrude legs and arms, use mirror extruding (Shift+E). In Blender 2.57 this option is hidden in the Tools toolbar - go to armature Edit mode, open the Tools toolbar (Ctrl+T) and check X Axis Mirror Editing. This enables Shift+E - otherwise Shift+E behaves like standard extruding (pressing E).

5) Prepare bone influence before parenting armature to mesh
Show bones as envelopes and try to scale the envelopes (bone effect areas) to cover the parts of the mesh you wish them to affect. With the current version of AnimKit it is good not to leave mesh areas that are under no bone's influence. I plan to open an issue/push a patch about this once I get time to debug it. If you later see parts of the mesh stretched toward the mesh centre, it is a sign you need to come back to this step.

6) parent the mesh to the armature. Select automatic weights. 

7) Start creating animation (create action)
Go to pose mode. Open one DopeSheet view and switch to Action Editor. Add an action. Make sure timeline cursor is on frame 1.

8) Add key 
Rotate and translate the tail bone to its extreme position on the left. Press I to insert a key.

9) Add one more key
Move the time slider and position the tail as in the initial position. Press I to add another key.

10) Add one more key
Add a position symmetrical to the one in step 8). It is the extreme position with the tail on the right.

11) Finish animation editing (more keys)
Copy the keys from 9) and 8) to get a full cycle of the tail going 8 (left) -> 9 (down) -> 10 (right) -> (down) -> (left). After this I opened the timeline view, set the animation length to the length of the cycle, and could see the animation rendered in Blender.

12) Use the file from C++ code.
I saved the file and changed the demo code from Blu.blend to use my mesh. Don't forget to add the file to the build target (it has to be in the bundle).

13) Contributing patch upstream (gamekit)
There were a few minor issues, as initially I could not see the animation running. It was a nice experience to start collaborating with Erwin and Xavier from the gamekit project - getting a minor fix for the animation not playing into AnimKit upstream code: I just opened an issue and attached the patch. The fix got into AnimKit soon after that. A few other related patches I plan to "pull" later this week, but the code there should already be OK.

Thursday, December 15, 2011

Skeleton and pose character animation using AnimKit

Objective: How to implement pose and skeleton character animation in OpenGL ES 2.0?

I don't want to use gamekit with the OGRE 1.8 rendering engine - I need something lighter. I don't see this as "reinventing the wheel" - though there is nothing wrong with "reinventing the wheel" when it is fun, when you enjoy doing it. Hunter S. Thompson typed out copies of Hemingway's books - he enjoyed doing it and he knew why he was doing it. I believe he didn't have the feeling he would live for 1000 years - so, why not waste a bit of time... IMHO, doing this is the best/fastest way for me to learn, and it is fun. I guess one could say that I could also type copies of Hunter S. Thompson's articles before continuing, to make this blog more fun to read.

Anyway, I think this is just what I need - gamekit's AnimKit. I briefly checked the code and ran it on my desktop: it supports vertex (pose and morph) and skeleton animations, animation blending... with inverse kinematics on the roadmap. Grepped for MATRIX_PALETTE - it seems to have only software (on CPU) animation implemented. There is a nice demo application, AppAnimKitGL, which looks quite promising. It depends on libGLU, libGLEW and libGLUT, and uses the fixed rendering pipeline. So, there is some effort needed to port it to iPhone and N9 - but I did this work in previous posts/examples, so it should go faster now. Let's see. I plan to focus first on OpenGL ES 2.0.
I had some time during the weekend to check this. Here is how it looks on Nokia N9 running next to the AnimKit demo on Ubuntu:

Note that I did not port all of the demo helper features (bones, normal rendering, etc.). The difference in texture color on N9 comes from using a loaded BGR texture as RGB - if you plan to use the same code, you'll probably use a different Blender model and a different texture loader.
Code is available here:
To run it on Nokia N9, open Samples/AnimKitGL/ with QtCreator and just build & deploy. If you intend to run the original demo on desktop, note that you'll need to have Blu.blend in the current directory.

Next thing I plan to do is add (copy from previous project) iPhone/iPad project support files.


iPhone version

Note that the fragment shader is different from the N9 version - I uncommented the code that darkens the borders and does the Phong shading (giving the cartoon shading look).

I started from Xcode, File->New Project->OpenGL ES Application (iPhone). After this, I added all the sources to the project, replaced all "in project" includes using <> with "" (e.g. #include "Mathematics.h") and added OPENGL_ES_2_0 to the Preprocessor Macros item in the Target Info dialog. I also added code to print out frames-per-second info.

This made things compile, but when run, I couldn't see anything rendered. Both platforms use the same memory layout (little endian), so that wasn't the problem. It turned out that with 15000+ indices, glDrawElements with GL_UNSIGNED_INT indices doesn't work (in OpenGL ES 2.0, 32-bit indices require the OES_element_index_uint extension); GL_UNSIGNED_SHORT does.
I put a quick fix in the code with the following comment:

// in iOS 4.1 simulator, glDrawElements(GL_TRIANGLES, cnt, GL_UNSIGNED_INT did not work - did not show anything on screen
// while GL_UNSIGNED_SHORT works. Using this as temporary workaround until trying with new sdk or anyway, change types
// in animkit code (no reason to use unsigned int anyway).
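In practice, the workaround amounts to narrowing the 32-bit index data to 16 bits before uploading it. A hedged sketch of such a conversion (my own helper, not AnimKit code - valid only while every index fits in 16 bits, otherwise the mesh would need splitting):

```cpp
#include <cstdint>
#include <vector>

// Convert GL_UNSIGNED_INT index data to GL_UNSIGNED_SHORT.
// Returns false if any index exceeds 65535, i.e. the mesh has too
// many vertices for 16-bit indices and would need to be split.
bool toUShortIndices(const std::vector<uint32_t>& in,
                     std::vector<uint16_t>& out) {
    out.clear();
    out.reserve(in.size());
    for (uint32_t i : in) {
        if (i > 0xFFFFu) return false;   // doesn't fit in 16 bits
        out.push_back(static_cast<uint16_t>(i));
    }
    return true;
}
```

The returned flag makes the 65535-vertex limit explicit instead of silently truncating indices.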

This got the scene rendered, and the only thing left to do was to add depth buffer support (the default Xcode SDK skeleton code doesn't have it on).

Code for the example running on iPhone (note that I was using iOS SDK 4.1 - yep, I plan to update soon...). A few fixes were added later here:
I plan to merge it to master after verifying the changes don't break anything on N9.

The next thing to try is a scene with multiple actors with animated skeletons (I think I'll do cartoon animals) and hardware (on GPU) matrix palette skeleton animation.

Tuesday, December 6, 2011

Collision detection for character - hit character falls down as rag-doll

Objective: how to implement collision detection for an animated character? The character has a bone system and its position and movement are controlled by poses and animations, but once it gets hit it needs to fall down like a rag-doll.

Get the character model into ReadBlend: there is no code change needed for this part - if you have an existing model with a character, open the project's blend file (PhysicsAnimationBakingDemo.blend) and use Blender's File->Append or Link command. Once you select the blender file to "append" the character model from, select Objects and then select the meshes, armatures or constraints to import. I tried with this file and it worked - got the static character displayed in the demo. However, with ~300 triangles per piece, the number of vertices needed to be reduced so as not to affect the frames-per-second count. If you hit an "Error Not A Library" error in Blender when appending, it is usually related to incompatible versions - open the biped-rig blend file, save it (overwrite) and then try to append from it again.

Once I got the biped_rig character displayed next to the tree, it just didn't look nice. I decided to make my own from boxes; it didn't take long to make the head, spine, pelvis, upper and lower legs and arms, feet (and hands) from boxes used in the scene. When the model is created, assuming all body pieces are marked with the default "Static" (and not "Rigid body") in Blender, the character made of boxes gets displayed in the demo in standing position, not affected by gravity but with the ball bouncing against it. If the body parts were marked as "Dynamic", the ball would push them. So, for character body parts controlled by animation, the proper mode is Dynamic. Once contact with the ball is detected, we just "switch" them to be affected by gravity and they fall down.

Modeling joint constraints in Blender: to get "more realistic" rag-doll joint constraints, I need btConeTwistConstraint and btHingeConstraint, and to set limits. Check the details for joint constraints in the Bullet Physics Ragdoll example code. I didn't set the limits in Blender 2.49, which you can tell from the demo below. When needed, constraints can be tweaked in C++ code (BulletBlendReaderNew::convertConstraints()). Anyway, just add joint constraints between body parts (this tutorial might help) to the Blender file and they also work on the phone.

Turn on gravity for the character body parts - when the character is hit, it needs to fall down like a rag-doll: just after stepSimulation(), call processContactData() to check for collision between the ball and the character:

void BulletBlendReaderNew::processContactData()
{
    // Assume world->stepSimulation or world->performDiscreteCollisionDetection has been called
    int numManifolds = m_destinationWorld->getDispatcher()->getNumManifolds();
    for (int i=0; i<numManifolds; i++)
    {
        btPersistentManifold* contactManifold = m_destinationWorld->getDispatcher()->getManifoldByIndexInternal(i);
        btCollisionObject* obA = static_cast<btCollisionObject*>(contactManifold->getBody0());
        btCollisionObject* obB = static_cast<btCollisionObject*>(contactManifold->getBody1());

        int numContacts = contactManifold->getNumContacts();
        if (!numContacts)
            continue;

        // check if one of the colliding shapes is the ball
        if (obA != ball && obB != ball)
            continue;

        // this is a simplified check whether the other collision shape is part of the character's armature
        if (obA->getCollisionShape()->getUserPointer() == &(this->armature)
                || obB->getCollisionShape()->getUserPointer() == &(this->armature))
        {
            // convert armature -> ragdoll in case the ball hits the character
        }
    }
}

void BulletBlendReaderNew::turnToRagdoll(ArmNode *nodeList)
{
    ArmNode *next = nodeList;
    while (next) {
        if (next->piece)
        {
            // turn on gravity / dynamic behavior for this body part (code elided)
        }
        next = next->nextSibling;
    }
}

Note that all of the body parts' meshes are marked as "Dynamic" in Blender's Logic panel (F4). The dynamic bodies initially have their angular factor set to 0.f (setAngularFactor) in ReadBlend, in BulletBlendReaderNew::createBulletObject().

In the case of complex, sophisticated character meshes, it makes sense to use the same technique, and not only on constrained systems (like iPhone and N9): don't add the complete mesh to the physics world for collision and gravity simulation, but create a "shadow" model out of boxes just for physics and collision. Then, when rendering the rigged mesh, use the motionState to get the location (transformation) of each box (a bone in the simplified armature in the physics simulation world) and apply that transformation to the complex rigged character's bones when rendering it. The simple physics model is not rendered; it just exists in the physics world.
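The mapping from physics-box transform to render-bone matrix could look like the sketch below. This is my own illustration, not code from the demo; Bullet types are avoided to keep it self-contained (in real code, boxWorld would come from btMotionState::getWorldTransform), and the column-major 4x4 layout matches OpenGL:

```cpp
#include <array>

using Mat4 = std::array<float, 16>;  // column-major 4x4, as OpenGL expects

// Multiply two column-major matrices: out = a * b.
Mat4 mul(const Mat4& a, const Mat4& b) {
    Mat4 out{};
    for (int c = 0; c < 4; ++c)
        for (int r = 0; r < 4; ++r)
            for (int k = 0; k < 4; ++k)
                out[c * 4 + r] += a[k * 4 + r] * b[c * 4 + k];
    return out;
}

// Bone matrix for skinning = current world transform of the physics box
// times the inverse of the bone's bind-pose transform, so a box that has
// not moved from its bind pose yields the identity offset.
Mat4 boneFromPhysicsBox(const Mat4& boxWorld, const Mat4& inverseBindPose) {
    return mul(boxWorld, inverseBindPose);
}
```

Each frame, the resulting matrices would be uploaded as the skinning palette, so the rendered character follows the invisible box ragdoll.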

Monday, November 28, 2011

Tree collision demo - getting the texture and color to N9

I didn't use the textures from the demo after porting to N9:
To get the textures to work when using the jpeg library (i.e. when not using the iPhone SDK to generate images - code under #ifdef USE_IPHONE_SDK_JPEGLIB), it is necessary to change m_pixelColorComponents from GL_RGBA to GL_RGB here.
It seemed like a good idea not to texture the tree, but to supply the color via a uniform value when rendering each tree instance. I plan to check how the tree looks with very few leaves, and for that the ngPlant leaves group would probably be better off with textures (billboards). For an example of how to use a uniform color value in a shader, just grep the code for u_Color - there are several examples available.
This is how it looks on N9 - sorry for the poor video quality.