Fifteenth Annual Meeting

Time:             Saturday, March 5, 2005, 8:30 a.m. - 4:30 p.m.

Place:             Lecture Room, Sterling Memorial Library,
                      Yale University
                      120 High Street (entrance on Wall Street), New Haven, CT

Organizer:      Bruno H. Repp, Haskins Laboratories
                       E-mail: repp@haskins.yale.edu


9:00 - 9:25


Jason Biddle, Rajal G. Cohen, Robin Fleckenstein, Steven A. Jax, Robrecht van der Wel, and David A. Rosenbaum
(Pennsylvania State University)
Email: jcb281@psu.edu

How do we control reaches around obstacles? To address this question, we conducted two series of experiments in which participants moved the hand over an obstacle in time with a metronome. Both series of experiments revealed striking sequential effects.

In one set of experiments, subjects hit the base of a hand-held dowel against each of two targets straddling an obstacle. The task was to move back and forth over the obstacle, hitting each target in time with a metronome. We manipulated the minimum possible reach height (i.e., based on a combination of required dowel grasp height and obstacle height) as well as the obstacle height relative to gaze height. In each trial, subjects made approximately 15 passes over the obstacle. We measured clearance, defined as the maximum height of the base of the dowel relative to the height of the two targets (which were of equal size). Minimum possible reach height did not have a reliable effect on clearance, but lower clearances were achieved for obstacles at or near gaze level than for obstacles higher or lower than gaze level. The most dramatic outcome was a sequential effect: Clearance decreased in successive passes over the obstacle, at a decreasing rate.

In the other series of experiments, participants hit the base of a hand-held dowel on each of a series of targets laid out in a horizontal semi-circle. An obstacle of variable height stood between different pairs of targets. Under the requirement that the base of the hand-held dowel successively hit adjacent targets, subjects demonstrated another sequential effect: After jumping over an obstacle, the height to which they brought the dowel in subsequent jumps decreased only gradually. This perseveration effect scaled with obstacle height. Meanwhile, there was little or no anticipatory effect: Jump heights hardly changed as the hand approached an obstacle and were negligibly different from jump heights in the no-obstacle control condition. To further investigate the sequential effect, we also performed an experiment using two obstacles as well as one. The data from that experiment were still being analyzed when this abstract was prepared.

9:25 - 9:50


Rajal G. Cohen and David A. Rosenbaum
(Pennsylvania State University)
Email: rajal@psu.edu

Are stillness and movement qualitatively or only quantitatively different? Results of three investigations favor the view that they are only quantitatively different.

1. When participants attempted to hold still, the tiny hand and arm movements they made (postural tremor) conformed to the 2/3 power relation between curvature and tangential velocity, as observed previously with large-scale intentional movements (Viviani & Terzuolo, 1982).

2. When participants alternated between holding still and carrying out large-scale spatially directed hand and arm movements, a scatter plot of the end effector's instantaneous change of direction against its instantaneous speed was well fit by a single uninflected function.

3. When participants alternated between holding still and carrying out large-scale spatially directed hand and arm movements (as in 2), end effector speed in the holding-still period was more variable in each axis when the target of the anticipated movement was in that axis than when the target of the anticipated movement was in the orthogonal axis.
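
As background for point 1, the 2/3 power law (Viviani & Terzuolo, 1982) relates tangential speed v to curvature kappa as v proportional to kappa^(-1/3), equivalently angular velocity proportional to kappa^(2/3). The sketch below is a generic numerical illustration of that relation, not the authors' analysis: an ellipse traversed at constant phase rate satisfies the law exactly, so a log-log regression of speed on curvature recovers the -1/3 exponent.

```python
import numpy as np

# Ellipse traversed at constant phase rate: x = a*cos(t), y = b*sin(t).
# Such motion satisfies the 2/3 power law exactly: v = K * kappa**(-1/3).
a, b = 2.0, 1.0
t = np.linspace(0.0, 2 * np.pi, 2000)

dx, dy = -a * np.sin(t), b * np.cos(t)        # velocity components
ddx, ddy = -a * np.cos(t), -b * np.sin(t)     # acceleration components

v = np.hypot(dx, dy)                          # tangential speed
kappa = np.abs(dx * ddy - dy * ddx) / v**3    # curvature

# Log-log regression: the fitted exponent should be -1/3.
slope = np.polyfit(np.log(kappa), np.log(v), 1)[0]
print(f"fitted exponent: {slope:.4f}")        # ~ -0.3333
```

Empirical movement data only approximate this relation, so the fitted exponent for real tremor or drawing trajectories scatters around -1/3 rather than matching it exactly.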

9:50 - 10:15


Amanda Dawson and David A. Rosenbaum (Penn State University)
Email: jamd341@psu.edu

In a series of experiments we demonstrate that neurologically normal individuals can move their hands independently with little or no training. We obtained this result by employing a bimanual haptic pursuit tracking task.

In Experiment 1, paddles that were to be continually contacted with the left and right index fingers were moved in quasi-random patterns either by one or two human drivers. Subjects were able to maintain contact with the paddles equally well in both conditions despite the independent movements required of their hands in the two-driver condition.

Experiment 2 replicated this effect with paddles that were moved in either a circular or square pattern. The movement pattern of one paddle matched or mismatched the movement pattern of the other paddle. Nevertheless, subjects generated the phasing, frequency, and spatial position required to track the paddles, even when this meant making a circle with one hand while making a square with the other.

We conclude that haptic tracking enables subjects to move their hands independently because it reduces the need for centrally generated plans for extended sequences of behavior. Hand dependence, we suggest, occurs when actors try but fail to maintain two separate movement plans in working memory. This view contrasts with the possibility that hand dependence results from neural cross-talk in motor outflow. Our results are discussed in relation to other cognitive theories of bimanual coordination, including Mechsner's (2004) perceptual-cognitive approach and Franz, Zelaznik, Swinnen and Walter's (2001) task-concept hypothesis.

10:15 - 10:40


Jessica Ward1,2 and Daniel J. Levitin1
(1McGill University, 2Goldsmiths College, London)
Email: psp01jw@gold.ac.uk

Previous research into polyrhythmic tapping has primarily concerned itself with single individuals tapping with their two hands, treating any problems that arise as primarily motor-program problems. Polyrhythmic tapping systems have historically included a physical linkage, and investigations have focused on the nature and properties of that linkage. However, recent work has begun to acknowledge the importance of an "informational" or cognitive linkage in multieffector systems. Here, we used a finger tapping task to explore this phenomenon.

We report four experiments in polyrhythmic tapping in which the only linkage between the effectors is informational, specifically, auditory and visual cues. There was no physical linkage because the two hands involved in the polyrhythmic tapping belonged to two different participants. In two of these experiments, participants were unable to remain independent of one another and made the same errors in tapping as would a single-person bimanual system, while in the other two they were able to remain independent. The difference appears to depend on whether the task is presented as warranting coupling or independence in the context of music-making. Interestingly, there was no effect of visual cue condition on performance in any condition. These results provide evidence that coupling can and does occur as a result of auditory informational linkage, but indicate that we are able to ignore the information when we judge that it would be to our musical advantage to do so.

10:40 - 11:10

Coffee break

11:10 - 11:35


Dagmar Sternad (Pennsylvania State University)
Email: dxs48@psu.edu

The influence of a limb's resonance or preferred frequency was investigated in two rhythmic tasks. In a rhythmic tracking task, participants followed a sinusoidal visual target by swinging one of three hand-held pendulums of different resonance frequencies. In a continuation task, participants first synchronized with a target frequency for 20 s and then continued their oscillation at the same frequency for 60 s. Seven metronome frequencies were presented that were equal to, higher than, or lower than the individually preferred frequency, thus creating different degrees of symmetry between the target and the preferred movements. Results showed that (i) the preferred frequency scaled linearly with the pendulum's resonance frequency; (ii) the relative phase between target and tracking trajectories systematically led or lagged as a function of the symmetry between resonance and target frequency; (iii) variability was minimal at the preferred target frequency and increased with deviations from symmetry. Results of the continuation experiment showed that (iv) during continuation the cycle frequency drifted towards the preferred frequency in an exponential fashion; (v) the larger the difference between the target and the limb's resonance frequency, the larger the drift; (vi) variability displayed a U-shaped function, with minimum variability in the symmetry condition. A three-layered model was developed that consists of an internal "neural" oscillator coupled to a "mechanical" pendular limb via equilibrium point control. The observed period drifts and phase differences, which scaled with the asymmetry between target frequency and pendular resonance frequency, were reproduced qualitatively and quantitatively.
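
As standard physics background for the pendulum manipulation (not part of the study itself), the resonance frequency of an ideal simple pendulum scales with the inverse square root of its length. A hand-held pendulum is really a compound pendulum, so the sketch below, which uses the simple-pendulum approximation with an assumed equivalent length, is only a first-order illustration of how longer pendulums yield lower resonance frequencies.

```python
import math

def resonance_hz(length_m, g=9.81):
    """Resonance frequency (Hz) of an ideal simple pendulum of the
    given equivalent length, using f0 = sqrt(g / L) / (2 * pi)."""
    return math.sqrt(g / length_m) / (2 * math.pi)

# Longer pendulums resonate more slowly:
for L in (0.25, 0.50, 1.00):
    print(f"L = {L:.2f} m -> f0 = {resonance_hz(L):.2f} Hz")
```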

11:35 - 12:00


Ramesh Balasubramaniam (University of Ottawa)
Email: ramesh@uottawa.ca

Studies of movement timing often employ repetitive movements of the finger performed in time to a metronome beat. Previous work has looked at (a) movement trajectories of repetitive movements or (b) timing errors made with respect to the beat. The question of what kind of movements need to be produced to preserve accurate timing remains unanswered. In an experiment involving synchronization or syncopation with an external auditory metronome, we show that the nervous system produces trajectories that are asymmetric with respect to time and velocity in the out and return phases of the repeating movement cycle. This asymmetry is task specific and is independent of motor implementation details (finger flexion vs. extension). Additionally, we found that timed trajectories are less smooth (higher mean squared jerk) than unpaced ones. Negative correlations were observed between synchronization timing error and the movement time of the ensuing return phase, suggesting that late arrival of the finger is compensated for by a shorter return phase, and conversely for early arrival. We suggest that movement asymmetry in repetitive timing tasks helps satisfy requirements of precision and accuracy relative to a target event. In a second experiment, the movement asymmetry was altered using elastic (position-based) and viscous (velocity-based) force fields. While elastic force fields had a negligible effect on timing, participants found it very difficult to maintain rhythm in the viscous field. The relative roles of position- and velocity-based information in error detection and correction will be discussed.
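
Mean squared jerk, the smoothness index mentioned above, is the mean of the squared third time derivative of position; larger values indicate less smooth movement. The sketch below is a generic way to compute it from uniformly sampled position data (an assumption of this illustration, not a description of the authors' pipeline), with a sinusoid as a sanity check.

```python
import numpy as np

def mean_squared_jerk(x, dt):
    """Mean squared jerk of a uniformly sampled 1-D position signal.

    Jerk is the third time derivative of position. The first and last
    few samples are trimmed because repeated numerical differentiation
    is unreliable at the edges.
    """
    jerk = np.gradient(np.gradient(np.gradient(x, dt), dt), dt)
    return np.mean(jerk[5:-5] ** 2)

# Sanity check on x = sin(w*t): jerk = -w**3 * cos(w*t), so the mean
# squared jerk should approach w**6 / 2.
dt = 0.001
t = np.arange(0.0, 2.0, dt)
w = 2 * np.pi
msj = mean_squared_jerk(np.sin(w * t), dt)
print(msj, w**6 / 2)
```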

12:00 - 12:25


Howard Zelaznik1, Rebecca S. Spencer2, and Richard B. Ivry2 (1Purdue University, 2University of California, Berkeley)
Email: hnzelaz@purdue.edu

After briefly reviewing our previous work on the event-emergent timing model that we have been exploring, we report a new experiment suggesting that, in order to begin a timing task that is emergently timed, a representational timing system (i.e., event timing) is used to get the movement into the proper temporal ballpark. Subjects (n = 84) were assigned to one of three timing groups. Group One performed only one circle or one tapping interval per trial. Group Four produced four continuous circles or four tapping intervals per trial. Group Pause produced one interval, paused for two intervals, and then produced the last interval. The duration of an interval was 500 ms. We found that the first and last intervals of circle drawing exhibited elevated temporal variability. Furthermore, individual-difference correlations showed that the first interval of circle drawing was correlated with each of the tapping intervals. We take these results as initial support for a transformation hypothesis: Timing starts off as event-like, and then emergent timing is invoked.

12:25 - 1:30

Lunch (provided)

1:30 - 1:55


John W. Moore, Robert J. Polewan, and Christopher M. Vigorito
(U. of Massachusetts)
Email: jwmoore@psych.umass.edu

Cartesian reflexes are anticipations and false alarms to commands to blink or make some other voluntary response. The term comes from Descartes' 1649 observation that we blink involuntarily when a friend thrusts his hand toward our eyes, even though we know the friend would not actually strike us. Such anticipations are akin to classically conditioned eye blink responses. We embedded commands to blink within pictures of faces and examined response times (RTs). Our standard protocol involved repeated presentation of two faces and two simple geometric shapes or forms. Each of these four "CSs" was presented for 800 ms per trial. The command to blink appeared on each CS for 100 ms. The intervals between onset of a face or shape and the blink command ranged from 50 to 400 ms in various experiments. In addition to commands to blink embedded in CSs, the blink command appeared alone on the computer monitor. The RTs provided a measure of the "processing cost" (PC) of attending to a CS. Subjects who attended to the CS were generally slower to blink on command. PCs were significantly greater for faces than for shapes and larger for intervals of 200 ms than 400 ms. Furthermore, the difference in PCs for faces and shapes was inversely related to subjects' ratings of how good they are at telling what someone is thinking by looking at their face. We obtained the same pattern of results with a mouse-click response instead of a blink response. Results suggest that attention to faces imposed a switching cost that resulted in longer RTs to embedded commands and/or increased RTs by shortening time estimation through an attentional-gate mechanism.

1:55 - 2:20


Amandine Penel, Christopher A. Hollweg, and Carlos D. Brody
(Cold Spring Harbor Laboratory, NY)
Email: enel@cshl.edu

We used an analog reporting method to investigate the processing of temporal patterns, which involves the translation of temporal information into a visuospatial representation. Based on the assumption that the participants' visuospatial report is an accurate reflection of their mental representation of the temporal pattern, the method provides a quantitative and direct measure of the mental representation.

Participants heard sequences of three brief tone pips, spanning a total of 1 or 1.2 sec from first to last tone (blocked), with the middle tone uniformly distributed within the interval. After each sequence, they had to place a vertical line within a horizontal bar symbolizing the sequence, at a position that represented the time when the middle tone occurred. We found that stimuli with middle tones that were within +/-10% of the midpoint (i.e., near 0.5 sec in the 1 sec sequences, or 0.6 sec in the 1.2 sec sequences) were reported as if they had occurred at the midpoint itself (i.e., assimilation). A subdivision of the total interval into equal parts thus seemed to correspond to a perceptual category: Response variability was maximal at the boundaries of the assimilation zone, and a contrast effect was observed immediately beyond that zone (i.e., participants exaggerated the tone's deviation from the temporal midpoint). If the method indeed captures the mental representation of the temporal patterns, it should be possible to validate our findings in experiments not involving visuospatial responses. We performed a second experiment in which a classical 2AFC task was used. Participants had to compare two auditory sequences, each consisting of three tone pips, and decide whether in the second sequence, compared to the first sequence, the middle tone was played earlier or later. As predicted by the visuospatial data of the first experiment, local maxima in performance were observed near the assimilation zone's boundaries in this purely auditory experiment when one sequence fell outside the assimilation zone while the other sequence fell within it.

2:20 - 2:45


Scott W. Brown and Steven R. Usher (University of Southern Maine)
Email: swbrown@usm.maine.edu

Much of the research on time and attention utilizes the dual-task paradigm, in which a timing task is performed concurrently with a distractor task. Timing performance typically shows an "interference effect", characterized by increased error and variability relative to control (timing-only) conditions. In some cases, the interference is bidirectional, with the concurrent distractor and timing tasks interfering with one another. The issue of bidirectional interference has important implications for understanding the cognitive psychology of time. For example, mutual interference implies that the tasks rely on the same resources, whereas partial interference implies that the two tasks are less related. The present research was designed to compare interference effects associated with two different distractor tasks. These distractor tasks required that subjects judge pairs of statements. Subjects were tested under both single-task and dual-task conditions. The single-task conditions included a timing task (serial temporal production), a similarity task (judging whether statement pairs were related or unrelated), and a sequence task (judging whether the statements were in the correct or incorrect temporal order). In the dual-task conditions, subjects performed the timing task concurrently with each of the distractor tasks. The results showed a pattern of bidirectional interference, with the distractor tasks interfering with timing performance, and timing interfering with distractor performance. However, the timing task produced a stronger degree of interference on the sequence judgment task compared with the similarity judgment task. This outcome supports the hypothesis that timing and sequencing are closely related processes that rely on the same set of cognitive resources or mechanisms.

2:45 - 3:15

Coffee break

3:15 - 3:40


Steven C. Seow (Brown University)
Email: sseow@brown.edu

It is natural for people to subdivide an isochronous interstimulus interval (ISI) to better remember and reproduce the interval. For example, an ISI of 1000 ms can be mentally subdivided into four subintervals of 250 ms, and a response can be produced at every fourth subinterval. Three experiments were conducted to assess the effects of subdivision on self-paced continuation tapping performance. In all experiments, participants listened to isochronous metronome-like auditory stimuli (loud ticks) that were separated by less accented versions of the stimuli (soft ticks). When the presentation of these stimuli was completed, participants proceeded to tap on a key at the interval established by the loud ticks. Analysis of the interresponse intervals (IRIs) from the three experiments showed that self-paced continuation tapping was closest to the ISI when the length of the subdivision was 400 ms. These results support prior findings of a privileged point between 300 and 400 ms that leads to optimal motor performance in finger tapping.

3:40 - 4:05


Michael Hove (Cornell University) and Peter E. Keller (University of Management and Finance, Warsaw)
E-mail: mjh88@cornell.edu

Ensemble music performance typically requires synchronization with other musicians; this results in target sequences consisting of chords containing multiple tones and multiple onsets. Experiments 1 and 2 investigated whether sensorimotor synchronization with chord sequences containing tone-onset asynchronies is affected by (a) the magnitude of these asynchronies (large or small) and (b) the pitch of the leading tone (low or high). Participants tapped a finger in synchrony with five different types of chord sequences. Each sequence consisted of a single chord made up of a high-pitched and a low-pitched tone; the tones were either presented simultaneously, or the low tone preceded or followed the high tone by 25 or 50 ms. Results indicate that taps were drawn toward the second onset in each chord, especially when it was lower in pitch than the first onset. Experiment 3 measured the perceptual centers (P-centers) of the five chords. These results indicate that P-centers may largely account for the synchronization timing differences in Experiment 2.

4:05 - 4:30


Bradley W. Vines1, Carol L. Krumhansl2, Marcelo M. Wanderley1, and Daniel J. Levitin1 (1McGill University, 2Cornell University)
E-mail: bvines@po-box.mcgill.ca

This talk presents research into cross-modal interactions in auditory and visual perception of naturally occurring stimuli. Two experiments explored the multi-modal experience of observing musical performance by investigating (1) the real-time experience of musical form and musical emotion, and (2) the multidimensional structure of affective responses to musical performance, as a function of performers' expressive intentions. Musically trained participants saw, heard, or both saw and heard, performances of a Stravinsky piece for solo clarinet. We collected continuous judgments (made in real-time with a sliding potentiometer, as performances were presented) and discrete judgments of emotional content (made on a Likert scale after each performance was presented). The data analyses included traditional statistics and Functional Data Analysis to explore the continuous measurements, and Factor Analysis to identify the major dimensions of affective impact, as revealed by the discrete judgments. This work has quantified the ways in which auditory and visual components of musical performance contribute singly and in interaction with one another to influence the overall experience. The studies show that seeing a musician perform augments, complements and interacts with hearing to significantly influence music perception. These results are relevant to, and inform theories on, multi-sensory integration, emotion, and music cognition, as well as performance practice and audio-video media.

5:00 - 6:20

Drinks at Bar (254 Crown Street)

6:30 - 8:30

Dinner at Bentara restaurant (76 Orange Street)