We investigated the real-time cascade of postural, visual, and manual actions for object prehension in 38 infants aged 6 to 12 months (all independent sitters) and eight adults. Participants' task was to retrieve a target as they spun past it at different speeds on a motorized chair. A head-mounted eye tracker recorded visual actions, and video captured postural and manual actions. Prehension unfolded in a coordinated sequence of postural-visual-manual behaviors, beginning with turning the head and trunk to bring the toy into view, which in turn instigated the start of the reach. Visually fixating the toy to locate its position then guided the hand to toy contact and retrieval. Prehension performance decreased at faster speeds, but quick planning and implementation of actions predicted better performance.