
paper on human hand biomechanics

Admittedly, the paper is not that new: it was written in 2011, and the Springer publication path is particularly careful. But it is out, in the 2014 STAR book “The Human Hand as an Inspiration for Robot Hand Development”.

The paper, written by Georg Stillfried, lays out a highly accurate method to measure the kinematics of the human hand. The model is based on magnetic resonance images of the hand: a convoluted way of getting a model, but it does give you rather accurate positions of the bones of the hand in various postures.

Why would one want to do this? There is a good reason: our fingers are optimised to grip, that is, to hold an object and manipulate it once it is in the hand. To do that we want to (a) exert enough force without too much effort and without discomfort, and (b) get optimal sensory, that is tactile, feedback. This is done by putting the “flat” parts of our fingertips together. To that end, our thumb is constructed to move towards the fingertips of the four fingers; but the fingers, too, are constructed such that they “move towards” the thumb. Your average robotic hand does not do that! So we wanted to understand the complete kinematics, in order to solve this problem once and for all, and to allow robot hand designers to construct more useful robotic hands.

To gather the data, I started doing these recordings in an MRI tube many years ago. It is a tiresome ordeal: keep your hand perfectly still for a few minutes per image, for many images in a row, adding up to many hours of agony. Any idea how loud the inside of an MRI scanner is? Anyway, Marcus Settles, co-author of the paper, needed many adaptations to the settings of his complex machine to get a good separation between bone and non-bone tissue, which was what I was interested in.

I quickly grew tired of scanning, so we got a medical student to do that, who also did the segmentation of bone from the surrounding tissue. Again tedious, taking many hours of semi-automatic work. Later on we developed methods for automatic segmentation, not just because the student left, but also because manual segmentation is not very reproducible, and the errors introduced there could influence our final results considerably.
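For illustration, here is what the first step of such an automatic segmentation could look like: a crude intensity threshold followed by connected-component filtering. This is a minimal sketch in Python with made-up names and thresholds, not the actual pipeline we used:

```python
# Minimal sketch of threshold-based bone segmentation on one MRI volume.
# Illustration only: the threshold and minimum size are made up, and the
# real pipeline was considerably more involved.
import numpy as np
from scipy import ndimage

def segment_bones(volume, threshold=0.5, min_voxels=500):
    """Return a labelled volume of candidate bone regions."""
    mask = volume > threshold               # crude intensity threshold
    mask = ndimage.binary_opening(mask)     # remove speckle noise
    labels, n = ndimage.label(mask)         # connected components
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    keep = np.isin(labels, 1 + np.flatnonzero(sizes >= min_voxels))
    return np.where(keep, labels, 0)        # drop tiny components
```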

Once we had the bones, how could they be used to create a kinematic model of the hand? Georg worked with an object-localisation method developed by Uli Hillenbrand, which finds known objects (i.e., bones from a reference image) in other 3D images. The tricky part is finding the right rotation about the long bone axis, but ample parameter tuning solved that issue.
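Uli's method itself is beyond the scope of this post, but the core of any such rigid registration is estimating a rotation and translation between corresponding point sets. A minimal Kabsch-style sketch in NumPy, assuming the point correspondences are already given (which is, of course, the hard part):

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares R, t such that dst ≈ R @ src + t.
    src, dst: (N, 3) arrays of corresponding points on a bone surface."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)                    # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                               # proper rotation, det = +1
    t = cd - R @ cs
    return R, t
```

For a near-cylindrical bone such a fit is ill-conditioned about the long axis, which is exactly where the parameter tuning mentioned above came in.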

Having localised the bones, the work was completed by parameter optimisation. In the model, a multi-DoF (robot-type rotary) joint is defined between each pair of adjacent bones, and for each joint we must determine its exact location and the axes it rotates about.
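To give a flavour of that optimisation, here is a toy formulation of my own (not the cost function from the paper): given the measured rotations of a bone relative to its proximal neighbour, fit a single hinge axis plus one joint angle per pose by nonlinear least squares.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(params, measured):
    """params = [theta, phi, q_1, ..., q_N]: axis direction in spherical
    coordinates plus one joint angle per measured pose."""
    theta, phi = params[:2]
    axis = np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])
    res = []
    for q, meas in zip(params[2:], measured):
        pred = Rotation.from_rotvec(q * axis)
        res.extend((pred * meas.inv()).as_rotvec())   # rotation discrepancy
    return res

# Synthetic usage: three poses rotated about a known axis.
true_axis = np.array([0.0, 1.0, 0.0])
measured = [Rotation.from_rotvec(q * true_axis) for q in (0.2, 0.5, 0.9)]
x0 = np.concatenate([[1.0, 1.0], np.zeros(len(measured))])
fit = least_squares(residuals, x0, args=(measured,))
```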

The model ends up with 25 degrees of freedom. That is nice: it looks like 5 per finger. The joints, as said, are robotic-type rotary joints, not the original sliding joints you find in biology (another paper by Georg focuses on that).
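To make that concrete, here is a toy forward-kinematics chain for a single finger with three hinge joints. The link lengths and axis choices are invented for the example, not the fitted values from the paper:

```python
import numpy as np

def rot(axis, angle):
    """Homogeneous 4x4 rotation about a unit axis (Rodrigues' formula)."""
    x, y, z = axis
    K = np.array([[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]])
    T = np.eye(4)
    T[:3, :3] = np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * K @ K
    return T

def trans(v):
    """Homogeneous 4x4 translation."""
    T = np.eye(4)
    T[:3, 3] = v
    return T

def fingertip(angles, lengths=(0.05, 0.03, 0.02)):
    """Fingertip position of a planar three-hinge finger (toy example)."""
    T = np.eye(4)
    for q, L in zip(angles, lengths):
        T = T @ rot((0, 0, 1), q) @ trans((L, 0, 0))  # hinge, then link
    return T[:3, 3]

print(fingertip([0.3, 0.5, 0.2]))  # fingertip for the given joint angles
```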

Enjoy our “new” paper here.


Note that the study is not complete. What we left out is the influence of passive movement: if you exert force with your fingers, their positions change more and more as the force increases, or when objects are moved in the hand. We understand that there is an exponential force–position relationship (see also this previous post), which gives a working hypothesis, but these measurements do not include such data.
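If you want to play with that hypothesis, a saturating force–position relationship could be modelled like this; the functional form and constants are my assumption, not data from the paper:

```python
import numpy as np

def passive_displacement(force, x_max=0.01, k=2.0):
    """Hypothetical model: displacement grows with force but saturates
    at x_max. The constants x_max and k are tissue-dependent guesses."""
    return x_max * (1.0 - np.exp(-k * force))
```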
