Motor Behavior Laboratory Info

Bimanual Prehension

Although bimanual skills are common in daily performance (e.g., opening a cupboard door with one hand while grasping a mug with the other), these skills impose complex constraints on our motor control system. At the planning level, the brain must organize two separate but coordinated movements; at the sensory level, attention must be divided between two spatially separated targets. Given the functional significance of bimanual movements (we use our two hands to accomplish a myriad of tasks), understanding how the brain solves this type of movement challenge is important for our overall understanding of human motor behavior. Moreover, recent interest in bimanual training for the therapy and rehabilitation of movement disorders such as those resulting from stroke and cerebral palsy necessitates a fundamental understanding of this type of movement control in the healthy population.

Current projects in the motor behavior lab center on understanding the coordination of the two hands in discrete tasks, such as reach-to-grasp movements, where a purposeful goal is being achieved. To understand how the brain plans and controls upper limb movements, we manipulate characteristics of the targets, such as target size or location; characteristics of the task, such as movement goal and complexity (e.g., aiming versus grasping); and environmental characteristics, such as the availability of sensory information. By manipulating these input characteristics and measuring the motor output, we can make inferences about how the brain uses target, task and environmental information for the planning and on-line control of the separate movements being made by the two limbs.

Recent Published Works:

  1. Mason, A.H., & Bruyn, J.L. (2009). Manual asymmetries in bimanual prehension tasks: Manipulation of object distance and object size. Human Movement Science
  2. Bruyn, J.L., & Mason, A.H. (2009). The allocation of visual attention during bimanual movements. Quarterly Journal of Experimental Psychology
  3. Mason, A.H. (2008). Coordination and control of bimanual prehension: Effects of perturbing object location. Experimental Brain Research
  4. Mason, A.H., & Bryden, P.J. (2007). Coordination and concurrency in bimanual rotation tasks when moving away from and toward the body. Experimental Brain Research
  5. Mason, A.H. (2007). Performance of unimanual and bimanual multi-phased prehensile movements. Journal of Motor Behavior

Future Directions for Bimanual Performance Research:

Research in the human motor behavior lab over the next five years will focus on further investigating the influence of target, task and environmental characteristics on bimanual performance. In particular, we are using eye movement recording techniques in combination with progressive perturbations of target direction to investigate how visual guidance influences movement coupling. In collaboration with Dr. Pamela Bryden (Wilfrid Laurier University, Canada), we are investigating how the task characteristics of endpoint congruency (i.e., both hands ending in the same versus different orientations) and final goal influence coordination in a sequential bimanual rotation task. We have also begun to examine how bimanual coordination changes as a function of development in children (Mason, Bruyn & Lazarus, to be submitted, Exp. Brain Res). Finally, we have recently begun collaborating with Dr. Leigh Ann Mrotek of UW–Oshkosh on projects related to understanding coordination as objects are passed between people.

Performance in Virtual Environments

The second line of research being pursued in the motor behavior lab, funded in 2003 by a five-year CAREER Award from the National Science Foundation, explores how the availability of visual information affects the performance of movements in virtual environments (VEs). Like a standard desktop computer system, a VE consists of a human operator, a human-machine interface, and a computer. In contrast to a standard desktop computer, the displays and controls in a VE are configured to immerse the operator in a predominantly graphical environment containing three-dimensional objects with locations and orientations in three-dimensional space. The operator can interact with and manipulate virtual objects in real time using their hands as input devices. The ultimate purpose of VEs is to provide the user with a computer-based tool for performing a variety of common and novel activities in promising areas such as scientific visualization, medical diagnosis, surgical training, and education. Although many technological advances have been made in each of these applications, VEs are still cumbersome and difficult to use. We operate under the hypothesis that virtual environments will only reach their full potential when we have a deeper understanding of how the human user functions in this artificial environment. Therefore, our research has focused on the systematic study of how humans use visual information to perform simple and complex skills in VEs.

Recent Published Works:

  1. Mason, A.H., & Bernardin, B.J. (2008). The role of visual feedback when grasping and transferring objects in a virtual environment. Proceedings of the 5th International Conference on Enactive Interfaces
  2. Mason, A.H., & Bernardin, B.J. (2007). The role of early or late graphical feedback about oneself for interaction in virtual environments. Proceedings of the 9th Virtual Reality International Conference
  3. Mason, A.H. (2007). An experimental study on the role of graphical information about hand movements when interacting with objects in virtual reality environments. Interacting With Computers

Future Directions for Research on Performance in VEs:

Our previous research investigated how young, healthy adults use sensory information when they perform tasks in virtual environments. Although this research provides important information about the normative performance of younger individuals in VEs, this subject pool does not match the demographics of our aging population. We see potential applications of VE technology for a broad range of users of all ages and abilities. In particular, VEs can be used by the young for skill development, by the healthy elderly for skill maintenance, and by the injured for rehabilitation. However, very little information is available on how performance in VEs changes throughout the lifespan as a function of the natural aging process or of brain injury, such as stroke. A three-year National Science Foundation grant proposal (2009-2012) from our lab, designed to study movement control in VEs in a population ranging in age from 6 to 80 years, was recently recommended for funding, with an anticipated start date of July 31, 2009. With this broad spectrum of participants, studied using a cross-sectional design, we will gain important knowledge about how performance in VEs is influenced by age. We will use basic motor control measurement and predictive modeling techniques to understand how sensory information is used and to suggest methods for improving the presentation of sensory information to users under a variety of target, task and environmental conditions.

We have also recently been awarded seed funding from the Graduate School to collect pilot data on the use of a virtual reality system for hand function rehabilitation after stroke. With this research we will begin the implementation and pilot testing of a virtual reality motor learning paradigm aimed at improving the ability to control whole-hand and individual finger forces during object manipulation tasks in persons in the chronic stage of stroke. These pilot investigations have the potential to generate important baseline and proof of concept data for an NIH proposal with the long-term project goal of developing a virtual reality system for hand function rehabilitation after stroke.

Journal Publications

  1. Mason, A.H., & Bernardin, B. (In Press). Vision for performance in virtual environments: The role of feedback timing. International Journal of Human-Computer Interaction.
  2. Bruyn, J.L., & Mason, A.H. (2009). The allocation of visual attention during bimanual movements. Quarterly Journal of Experimental Psychology, 62(7), 1328-1342. PDF
  3. Mason, A.H., & Bruyn, J.L. (2009). Manual asymmetries in bimanual prehension tasks: Manipulation of object distance and object size. Human Movement Science, 28, 48-73. PDF
  4. Mason, A.H. (2008). Coordination and control of bimanual prehension: Effects of perturbing object location. Experimental Brain Research, 188(1), 125-139. PDF
  5. Mason, A.H., & Bryden, P.J. (2007). Coordination and concurrency in bimanual rotation tasks when moving away from and toward the body. Experimental Brain Research, 183(4), 541-556. PDF
  6. Mason, A.H. (2007). Performance of unimanual and bimanual multi-phased prehensile movements. Journal of Motor Behavior, 39(4), 291-305. PDF
  7. Mason, A.H. (2007). An experimental study on the role of graphical information about hand movements when interacting with objects in virtual reality environments. Interacting With Computers, 19, 370-381. PDF
  8. Mason, A.H., & MacKenzie, C.L. (2005). Kinematics and grip forces when passing an object to a partner. Experimental Brain Research, 163(2), 173-187. PDF
  9. Mason, A.H., & MacKenzie, C.L. (2004). The role of graphical feedback about self-movement when receiving objects in an augmented environment. Presence: Teleoperators and Virtual Environments, 13(5), 507-519. PDF
  10. Mason, A.H., & Carnahan, H. (1999). Target viewing time and velocity effects on prehension. Experimental Brain Research, 127, 83-94. PDF
  11. Carnahan, H., McFadyen, B.J., Cockell, D., & Halverson, A.H. (1996). The combined control of locomotion and prehension. Neuroscience Research Communications, 19, 90-100. PDF

Conference Publications

  1. Mason, A.H., & Bernardin, B.J. (2008). The role of visual feedback when grasping and transferring objects in a virtual environment. In E. Ruffaldi & M. Fontana (Eds.), Proceedings of the 5th International Conference on Enactive Interfaces, 111-116. PDF
  2. Mason, A.H., & Bernardin, B.J. (2007). The role of early or late graphical feedback about oneself for interaction in virtual environments. In S. Richir & E. Klinger (Eds.), Proceedings of the 9th Virtual Reality International Conference, 121-130. PDF
  3. Mason, A.H., & MacKenzie, C.L. (2002). The effects of visual information about self-movement on grasp forces when receiving objects in an augmented environment. Proceedings of the 10th International Symposium on Haptic Interfaces for Virtual Environments and Teleoperator Systems, 105-112. PDF
  4. Mason, A.H., Walji, M.A., Lee, E.J., & MacKenzie, C.L. (2001). Reaching movements to augmented and graphic objects in virtual environments. CHI Letters – Proceedings of the ACM Conference on Computer-Human Interaction, 3(1), 426-433. PDF
  5. Mason, A.H., & MacKenzie, C.L. (2000). Collaborative work using give and take passing protocols. Proceedings of the International Ergonomics Association (IEA2000)/Human Factors and Ergonomics Society (HFES2000) Congress: Ergonomics for the New Millennium, 1, 519-522. PDF

Equipment in the Human Motor Behavior Laboratory

Motion Capture System

Human movement information is collected using two Visualeyez 3000 (PTI Phoenix) camera systems. These systems record the three-dimensional coordinates of light-emitting diodes (LEDs) attached to landmarks of interest on subjects and objects as various experimental tasks are performed. Highly accurate 3D data (precision of approximately 0.8 mm) can be collected at up to 3000 frames per second, depending on the number of LEDs in use. For motor behavior research, 3-18 LEDs are typically used and the collection frequency is generally set between 120 and 200 Hz.
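
To illustrate the first processing step these recordings typically feed into, here is a minimal sketch (in Python with NumPy, not lab code) of reducing one marker's 3D position samples to tangential velocity and a threshold-based movement onset estimate. The 200 Hz rate matches the range quoted above, while the 50 mm/s threshold and the synthetic reach trajectory are hypothetical stand-ins.

    import numpy as np

    def tangential_velocity(xyz, fs):
        # xyz: (n_samples, 3) array of 3D marker positions in mm
        # fs:  sampling frequency in Hz
        vel = np.gradient(xyz, 1.0 / fs, axis=0)  # per-axis velocity components
        return np.linalg.norm(vel, axis=1)        # resultant speed in mm/s

    def movement_onset(speed, threshold=50.0):
        # Index of the first sample whose speed exceeds the threshold (mm/s);
        # the 50 mm/s default is a hypothetical choice, not a lab convention.
        above = np.nonzero(speed > threshold)[0]
        return int(above[0]) if above.size else None

    # Hypothetical usage: a synthetic 1-second "reach" sampled at 200 Hz.
    fs = 200
    t = np.arange(0, 1, 1 / fs)
    xyz = np.column_stack([100 * np.sin(np.pi * t / 2),
                           np.zeros_like(t), np.zeros_like(t)])
    speed = tangential_velocity(xyz, fs)
    print(speed.max(), movement_onset(speed))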

Force/Torque Transducers

As people manipulate objects, two Nano 17 (ATI) force/torque transducers measure grasp forces at a sampling frequency of 1000 Hz. The grasp force data can be synchronized with the 3-D motion data to provide a more complete picture of these manipulative activities.
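
Synchronizing the 1000 Hz force records with the 120-200 Hz kinematic records requires putting both signals on a common timebase. The sketch below shows one standard way to do this, linear interpolation of the lower-rate kinematic samples onto the force timestamps; the 2-second trial length and the random placeholder positions are assumptions for illustration, not lab data.

    import numpy as np

    def resample_to(t_src, data, t_dst):
        # Linearly interpolate each column of `data` (timestamps t_src)
        # onto the target timestamps t_dst.
        return np.column_stack(
            [np.interp(t_dst, t_src, data[:, k]) for k in range(data.shape[1])]
        )

    # Hypothetical 2-second trial.
    t_kin = np.arange(0, 2, 1 / 200)     # kinematics sampled at 200 Hz
    t_force = np.arange(0, 2, 1 / 1000)  # grasp force sampled at 1000 Hz
    xyz = np.random.default_rng(0).random((t_kin.size, 3))  # placeholder positions

    # Upsample the kinematics onto the 1000 Hz force timebase so every
    # force sample has a matching hand position.
    xyz_on_force = resample_to(t_kin, xyz, t_force)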

Data Analysis Package

We have created a software package (KinSys) for 3-D kinematic data analysis, visualization and management. This software allows us to extract relevant information from the large kinematic and grip force datasets generated during data collection, in order to distill the effects of sensory information on movement. The software currently reads data from two leading motion capture systems (Optotrak and Visualeyez), and its functionality continues to be extended as our needs evolve.
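
KinSys itself is in-house software, so the sketch below is only a hypothetical illustration of the kind of per-trial reduction such a package performs: collapsing a speed trace and a grip force trace into a handful of scalar dependent measures of the sort reported in prehension studies. The function name, threshold, and measure set are all assumptions.

    import numpy as np

    def trial_measures(speed, grip_force, fs_kin=200, fs_force=1000,
                       onset_thresh=50.0):
        # speed:      (n,) tangential hand speed in mm/s
        # grip_force: (m,) grasp force in N
        # Assumes the trial actually contains a movement.
        moving = speed > onset_thresh
        onset = int(np.argmax(moving))                      # first supra-threshold sample
        offset = len(speed) - int(np.argmax(moving[::-1]))  # one past the last one
        return {
            "movement_time_s": (offset - onset) / fs_kin,
            "peak_velocity_mm_s": float(speed.max()),
            "time_to_peak_velocity_s": int(np.argmax(speed)) / fs_kin,
            "peak_grip_force_N": float(grip_force.max()),
            "time_to_peak_grip_s": int(np.argmax(grip_force)) / fs_force,
        }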