
In my OpenGL research (the OpenGL Red Book, I think) I came across an example of a model of an articulating robot arm consisting of an "upper arm", a "lower arm", a "hand", and five or more "fingers". Each of the sections should be able to move independently, but constrained by the "joints" (the upper and lower "arms" are always connected at the "elbow").

In immediate mode (glBegin/glEnd), they use one mesh of a cube, called "member", and use scaled copies of this single mesh for each of the parts of the arm, hand, etc. "Movements" were accomplished by pushing rotations onto the transformation matrix stack for each of the following joints: shoulder, elbow, wrist, knuckle - you get the picture.

Now, this solves the problem, but since it uses the old, deprecated immediate mode, I don't yet understand the solution in a modern OpenGL context. My question is: how do I approach this problem using modern OpenGL? In particular, should each individual "member" keep track of its own current transformation matrix, since matrix stacks are no longer kosher?

seveland

3 Answers


Pretty much. If you really need it, implementing your own stack-like interface is pretty simple. You would literally just store a stack, then implement whatever matrix operations you need using your preferred math library, and have some way to initialize your desired matrix uniform from the top element of the stack.
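If it helps, a minimal stand-in for the old stack might look something like this (a sketch using numpy; the class and method names are placeholders):

import numpy as np

class MatrixStack:
    def __init__(self):
        # Start with a single identity matrix, like glLoadIdentity()
        self._stack = [np.identity(4, dtype=np.float32)]

    @property
    def top(self):
        # The matrix you would feed to your shader's matrix uniform
        return self._stack[-1]

    def push(self):
        # Duplicate the current top, like glPushMatrix()
        self._stack.append(self.top.copy())

    def pop(self):
        # Discard the top, like glPopMatrix(); guard against emptying it in real code
        self._stack.pop()

    def multiply(self, m):
        # Compose a transform onto the current top, like glMultMatrixf()
        self._stack[-1] = self.top @ m

Right before each draw call you would then upload stack.top with something like glUniformMatrix4fv.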

In your robot arm example, suppose that the linkage is represented as a tree (or even a graph if you prefer), with a relative transformation specified between each pair of connected bodies. To draw the robot arm, you just do a traversal of this data structure and set the transformation of each child body to be the parent body's transformation composed with its own. For example:

def draw_linkage(body, view, visited=None):
    # Track visited bodies so a linkage stored as a graph is only drawn once
    if visited is None:
        visited = set()
    visited.add(body)

    # Draw the body using the view matrix
    # (draw() here stands for whatever issues your GL calls)
    body.draw(view)

    for child, relative_xform in body.edges:
        if child in visited:
            continue
        # The child's transform is the parent's composed with its relative one
        # (@ is numpy's matrix product; substitute your math library's operator)
        draw_linkage(child, view @ relative_xform, visited)
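As a concrete (if toy) illustration of that traversal, here is how such a linkage could be wired up; Body, its edges list, and its draw method are made-up names for this sketch, not part of any library:

import numpy as np

class Body:
    def __init__(self, name):
        self.name = name
        self.edges = []   # list of (child Body, relative 4x4 transform) pairs

    def draw(self, view):
        # A real renderer would upload `view` to a shader uniform and issue a
        # draw call; printing keeps the example self-contained.
        print(self.name, view, sep="\n")

def translate(x, y, z):
    m = np.identity(4, dtype=np.float32)
    m[:3, 3] = (x, y, z)
    return m

upper_arm = Body("upper arm")
lower_arm = Body("lower arm")
hand = Body("hand")

# The lower arm hangs off the elbow, the hand hangs off the wrist.
upper_arm.edges.append((lower_arm, translate(0.0, -2.0, 0.0)))
lower_arm.edges.append((hand, translate(0.0, -1.5, 0.0)))

draw_linkage(upper_arm, np.identity(4, dtype=np.float32))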
Mikola

In the case of rigid parts connected by joints, one usually treats each part as an individual submesh, loading the appropriate matrix before drawing.
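In modern OpenGL that typically means setting a per-part model-matrix uniform right before each part's draw call. A rough sketch using PyOpenGL (the Part fields and the uniform location are assumptions of this sketch):

from OpenGL.GL import (GL_TRIANGLES, GL_TRUE, GL_UNSIGNED_INT,
                       glBindVertexArray, glDrawElements, glUniformMatrix4fv)

def draw_parts(parts, model_loc):
    # Each part is assumed to carry a VAO, an index count, and a 4x4 numpy
    # matrix already composed from its chain of joint transforms.
    for part in parts:
        # numpy matrices are row-major, hence GL_TRUE to transpose on upload
        glUniformMatrix4fv(model_loc, 1, GL_TRUE, part.matrix)
        glBindVertexArray(part.vao)
        glDrawElements(GL_TRIANGLES, part.index_count, GL_UNSIGNED_INT, None)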

In the case of "connected"/"continuous" meshes, like a face, animation usually happens through bones and deformation targets. Each of those defines a deformation, and every vertex in the mesh is assigned a weight describing how strongly it is affected by each deformer. Technically this can be applied to a rigid limb model too, by giving each limb a single deformer with a nonzero weight.
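That weighting scheme is linear blend skinning; stripped down to the math (a numpy sketch, normally done in the vertex shader rather than on the CPU), it looks like this:

import numpy as np

def skin_vertices(positions, bone_matrices, weights):
    # positions:     (N, 4) homogeneous vertex positions
    # bone_matrices: (B, 4, 4) one deformation matrix per bone/deformer
    # weights:       (N, B) per-vertex weights, each row summing to 1
    skinned = np.zeros_like(positions, dtype=np.float64)
    for b, bone in enumerate(bone_matrices):
        # Every bone moves every vertex, scaled by that vertex's weight for it
        skinned += weights[:, b:b + 1] * (positions @ bone.T)
    return skinned

With one bone per limb and hard 0/1 weights, this reduces to the rigid case above.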

Any decent animation system keeps track of transformations (matrices) itself anyway; the OpenGL matrix stack functions have seldom been used in serious applications (ever since OpenGL was invented). But usually the transformations are stored in a hierarchy.

datenwolf

You generally do this at a level above OpenGL using a scene graph.

The matrix transforms at each node of the scene graph tree map straightforwardly onto OpenGL matrices, so it's pretty efficient.
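As a sketch of what such a node might look like (the names are placeholders, not any particular scene graph library), each node stores a local transform and the traversal accumulates world matrices that you then upload as shader uniforms:

import numpy as np

class SceneNode:
    def __init__(self, mesh=None, local=None):
        self.mesh = mesh        # whatever your renderer knows how to draw
        self.local = local if local is not None else np.identity(4, dtype=np.float32)
        self.children = []

    def collect(self, parent_world, out):
        # World matrix = parent's world matrix composed with this node's local one
        world = parent_world @ self.local
        if self.mesh is not None:
            out.append((self.mesh, world))   # upload `world` as the model matrix
        for child in self.children:
            child.collect(world, out)
        return out

# e.g. draw_list = root.collect(np.identity(4, dtype=np.float32), [])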

Martin Beckett