NEW ENGLAND SEQUENCING AND TIMING (NEST)

 

Seventeenth Annual Meeting

 

Time:          Saturday, March 17, 2007, 8:30 a.m. – 5:00 p.m.

 

Place:          Haskins Laboratories, 300 George Street,
                     New Haven, CT

 

Organizer:  Bruno H. Repp (e-mail: repp@haskins.yale.edu)

 

Assistant:    Minjung Son

 

NOTE: This meeting had to be canceled because of a severe winter storm that prevented most people from attending. Instead, two mini-conferences took place among people stranded in New Haven and Storrs, respectively. The former featured the scheduled talks by Rajal Cohen, Molly Henry, and Scott Brown, as well as impromptu presentations by David Rosenbaum and Devin McAuley. The latter included the scheduled talks by Howard Zelaznik, Paula Silva, and Stacy Lopresti-Goodman.

 

PROGRAM

 

8:30 – 9:00 Continental breakfast

9:00 – 9:25

IS RELATIVE TIMING REALLY "IN" THE GENERALIZED MOTOR PROGRAM?

Howard N. Zelaznik, Oh-Sang Kwon, and Zygmunt Pizlo (Purdue University)

E-mail: hnzelaz@purdue.edu

 

Since the seminal paper of Schmidt (1975, Psychological Review), the search for invariants in motor output has been an important area of research. From the information-processing perspective, the generalized motor program construct is purported to contain relative timing. In other words, when a performer speeds up or slows down a well-practiced task, the relative timing of the task components stays the same, although the overall duration changes. In most of the experiments that claim to support relative timing invariance, the subject is not strongly constrained spatially when speeding up or slowing down. In the present experiment, we examined the timing of a practiced skill under Fitts’ Law spatial constraints. Subjects performed a sequential Fitts’ Law task in which 10 targets were hit in succession, forming a polygon shape. Subjects were then transferred to a larger or smaller polygon set. Results showed that transfer was maximized only when the relative index of difficulty was maintained, but relative timing in general was not maintained. We suggest that previous findings of relative timing invariance might be due to not constraining the speed-accuracy aspects of the task.
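For readers unfamiliar with the "relative index of difficulty" mentioned above, the sketch below illustrates the standard Fitts’ law quantity ID = log2(2A/W) and shows that scaling movement amplitude A and target width W by the same factor (as in transfer to a larger or smaller polygon) preserves each segment's relative ID. The target dimensions here are invented for illustration and are not the stimuli used in the study.

```python
import math

def index_of_difficulty(amplitude, width):
    """Fitts' law index of difficulty: ID = log2(2A / W)."""
    return math.log2(2 * amplitude / width)

# Hypothetical inter-target distances (cm) and target widths (cm) for part of a polygon layout.
amplitudes = [8.0, 10.0, 12.0]
widths     = [1.0,  1.0,  2.0]

for scale in (1.0, 1.5):                       # 1.5 ~ transfer to a larger polygon
    ids = [index_of_difficulty(a * scale, w * scale)
           for a, w in zip(amplitudes, widths)]
    rel = [i / sum(ids) for i in ids]           # relative ID of each segment
    print(scale, [round(i, 2) for i in ids], [round(r, 2) for r in rel])
```

Scaling A and W together leaves each segment's ID, and hence its relative ID, unchanged; that is the transfer condition under which the abstract reports maximal transfer.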

 

9:25 – 9:50

SKILL LEARNING AND REFINEMENT IN A REDUNDANT TASK: MINIMIZING TIMING ERRORS WITH AN "EQUIFINAL TRAJECTORY"

Rajal Cohen and Dagmar Sternad (Pennsylvania State University)

E-mail: rajal@psu.edu

 

Throwing is performed too quickly for online sensory feedback to be used to determine the execution variables. Rather, subjects must learn over repeated throws to choose particular combinations of release time and velocity that are successful. Müller and Loosch (1999) suggested that as subjects become familiar with the task, conditions are established such that timing precision becomes less critical to performance. The equifinal trajectory is an ideal succession of angles and velocities that creates a long window of time during which releasing the ball at any moment would lead to success. According to the equifinal trajectory hypothesis, subjects approach this equifinal trajectory as they become more skilled at the task. In contrast, Smeets et al. (2002) argued that with more practice subjects adopt strategies that require greater timing precision. The present study examined performance in a throwing task to test these contrasting hypotheses. Subjects threw a pendular projectile to a target in a virtual set-up. They performed 180 throws per day over six days. The equifinal trajectory hypothesis was evaluated by computing the RMS error between each arm trajectory and the equifinal trajectory during a window of 25 ms around the actual moment of release. The timing precision hypothesis was evaluated by calculating the optimal release moment for every trajectory and quantifying how often subjects released within a 7 ms window around that moment. Results indicate that with practice, subjects tended to approach the equifinal trajectory. Releases within a 7 ms window of the optimal release moment were rare. The present data therefore support the equifinal trajectory hypothesis rather than the timing precision hypothesis. There may be critical differences between two apparently similar tasks that can lead a subject to optimize performance in different ways; a profitable further line of inquiry will be what, exactly, those differences are.
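A minimal sketch of the RMS-error measure described above, assuming the executed and equifinal trajectories are sampled as (angle, velocity) pairs at a fixed rate. The sampling rate, simulated data, and variable names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def rms_to_equifinal(trajectory, equifinal, t, release_time, window=0.025):
    """RMS distance between an executed trajectory and the equifinal trajectory
    inside a window centered on the moment of release.

    trajectory, equifinal : arrays of shape (n_samples, 2) holding (angle, velocity)
    t                     : time stamps in seconds, shape (n_samples,)
    """
    mask = np.abs(t - release_time) <= window / 2
    diff = trajectory[mask] - equifinal[mask]
    return np.sqrt(np.mean(np.sum(diff ** 2, axis=1)))

# Illustrative use with fabricated 1 kHz data (purely for demonstration).
t = np.arange(0.0, 1.0, 0.001)
equifinal = np.column_stack([np.sin(t), np.cos(t)])
executed  = equifinal + np.random.normal(0.0, 0.05, equifinal.shape)
print(rms_to_equifinal(executed, equifinal, t, release_time=0.6))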

 

9:50 – 10:15

HOW DOES STEADY-STATE FORCE PRODUCTION BY ONE HAND AFFECT THE OSCILLATION DYNAMICS OF THE OTHER?

Paula L. Silva1, Marisa C. Mancini2, Sergio T. Fonseca2, Miguel Moreno1, Michael T. Turvey1 (1University of Connecticut; 2Federal University of Minas Gerais)

e-mail: paula.silva@huskymail.uconn.edu

 

During the performance of motor activities such as walking or running, muscle forces generated in one body segment can affect the motion and position of segments far removed from it. As a result, the musculoskeletal system must deal with the stresses caused by the flow of forces through the kinetic chain. The objective of this study was to describe the effects of a tonic force produced at one wrist on the observed kinematic pattern and oscillatory dynamics of the rhythmic movements generated by the opposite wrist. Seven participants were asked to swing a single hand-held pendulum at a comfortable tempo (with amplitude unconstrained) while continuously squeezing a dynamometer with their other hand. The experimental protocol involved nine different conditions obtained by combining three pendulum lengths with three levels of force. Mean amplitude and period were computed from the pendulum displacement time series obtained in each trial. The oscillatory dynamics was evaluated with a graphical and statistical method developed by Beek and Beek (1988). This analysis revealed a change in the linear and nonlinear stiffness functions underlying the rhythmic movements produced by one hand as the level of force generated by the other hand increased. These effects were observed along with increases in the amplitude of oscillation. The results suggest a reorganization of the oscillatory dynamics as a function of remote force production. It is possible that this reorganization allows the system to take advantage of the changing context of forces during the performance of the rhythmic movements. Different hypotheses regarding the nature of the observed effects will be discussed.

 

10:15 – 10:45 Coffee break

10:45 – 11:10

MEDIAL PREFRONTAL CORTEX AND THE TEMPORAL CONTROL OF ACTION

Mark Laubach (John B. Pierce Laboratory and Yale University School of Medicine)

E-mail: mlaubach@jbpierce.org

Neuroimaging studies in human beings and unit recording studies in primates suggest that medial areas of prefrontal cortex (mPFC) are involved in decision making and action selection. A major issue is how mPFC is involved in deciding when to act. To study this issue, we have used reversible inactivation methods to "knock out" mPFC activity during a simple delayed response task. This manipulation resulted in excessive premature responding and a lack of temporal control over response initiation. When we combined reversible inactivations in mPFC with ensemble recordings in motor cortex, we found that inactivation of mPFC specifically reduced delay activity in motor cortex. We then recorded neural activity in mPFC during the task and found that one-third of neurons fired persistently during the delay period. Most recently, we have tested the hypothesis that mPFC neurons are sensitive to stimulus timing. When animals learned to respond to stimuli at novel times, the percentage of neurons with stimulus-related modulation in firing rate increased significantly. After learning, one-third of neurons (N=368) were active during the delay period, and all of these neurons fired more at either the short or long delays. At the population level, strong temporal correlations between delay-related neurons were observed early, but not late, in the delay period. These effects were not observed in the motor cortex. Our results suggest that medial prefrontal cortex is critical for the temporal control of action because it accounts for the expected timing of trigger stimuli. As neurons in mPFC are known to have massive descending projections to limbic, autonomic, and monoaminergic centers, we suggest that mPFC achieves temporal control over behavior by influencing subcortical systems involved in controlling attention, vigilance, and motivation.

11:10 – 11:35

PERIOD BASIN OF ENTRAINMENT FOR UNINTENTIONAL VISUAL COORDINATION

Stacy Lopresti-Goodman1, Michael J. Richardson2, Paula L. Silva1, and Richard C. Schmidt3 (1University of Connecticut; 2Colby College; 3College of the Holy Cross)

E-mail: stacyloprestigoodman@gmail.com

 

Previous research has demonstrated that a person’s rhythmic movements can become unintentionally entrained to the rhythmic movements of another person or of an environmental event. There are indications, however, that in both cases the likelihood of entrainment depends upon the difference between the independent or uncoupled periods of the two rhythms. The range of period differences over which unintentional person-environment visual coordination might occur was examined in two experiments. Individuals were instructed to swing a wrist-pendulum at a self-selected period while simultaneously reading aloud letters that flashed on a visually oscillating stimulus projected on a large screen. We directly manipulated the period of the visually oscillating stimulus with respect to the participant’s natural period of movement and thus precisely controlled the range of period differences that the participants experienced. Cross-spectral coherence analysis and the distribution of continuous relative phase revealed visual entrainment up to, but not exceeding, a 15% difference between a participant’s preferred period and the experimenter-determined period of the environmental stimulus. These findings extend the dynamical systems perspective on person-environment coupling (they reveal a loss of attractors via a saddle-node bifurcation) and highlight the significant role that period differences play in the emergence of unintentional coordination.
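The two entrainment measures named above (cross-spectral coherence and the distribution of continuous relative phase) can be sketched roughly as follows; the sampling rate and simulated signals are placeholders, not the study's data.

```python
import numpy as np
from scipy.signal import coherence, hilbert

fs = 100.0                                    # assumed sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)
stimulus = np.sin(2 * np.pi * 1.0 * t)                                  # environmental oscillation
pendulum = np.sin(2 * np.pi * 1.0 * t - 0.4) + 0.1 * np.random.randn(t.size)

# Cross-spectral coherence: values near 1 at the movement frequency indicate entrainment.
f, Cxy = coherence(pendulum, stimulus, fs=fs, nperseg=1024)
print("peak coherence:", Cxy.max(), "at", f[Cxy.argmax()], "Hz")

# Continuous relative phase from the analytic (Hilbert) signals.
phase_diff = np.angle(hilbert(pendulum)) - np.angle(hilbert(stimulus))
phase_diff = np.mod(phase_diff + np.pi, 2 * np.pi) - np.pi              # wrap to [-pi, pi)
hist, edges = np.histogram(phase_diff, bins=18, range=(-np.pi, np.pi))
print(hist)                                    # a peaked distribution indicates phase entrainment
```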

 

11:35 – 12:00

COMPATIBILITY OF MOTION FACILITATES VISUOMOTOR SYNCHRONIZATION

Michael Hove (Cornell University)

E-mail: mjh88@cornell.edu

 

Prior research indicates that synchronized tapping performance is far worse with flashing visual stimuli than with auditory stimuli. This difference may reflect an auditory advantage for processing temporal information, while visual processing may have the advantage with spatial information. Two finger-tapping experiments explored whether adding a spatial component, compatible or incompatible with the tapping action, can improve visuomotor synchronization performance over purely temporal flashing stimuli at various tempi. In Experiment 1, synchronization success rates increased dramatically for spatiotemporal sequences of both geometric and biological forms over flashing sequences. In Experiment 2, synchronization performance was best when target sequences and movements were directionally compatible (i.e. simultaneously down), followed by action-neutral stimuli, and was poorest for incompatible moving stimuli (upward target/downward movement) and flashing target sequences.  Additionally, I will briefly discuss work that uses moving metronomes to investigate interpersonal synchronization and affiliation. 

 

12:00 – 1:00 Lunch (provided)

1:00 – 1:25

WHY IS DELAYED AUDITORY FEEDBACK DISRUPTIVE? ASSESSING THE ROLE OF INTERNAL TIMEKEEPERS, MOVEMENT KINEMATICS, AND SEQUENTIAL RETRIEVAL

Peter Q. Pfordresher (University at Buffalo, SUNY)

E-mail: pqp@buffalo.edu

 

It has been known for some time that the presentation of delayed auditory feedback (DAF) during the production of a complex sequence (such as speech or music) profoundly disrupts the timing of production. I will summarize research that addresses putative causes for this disruption. In previous research, I have found that the relative timing of feedback onsets best predicts the disruption of timing for both isochronous tapping and for the production of melodic sequences on a keyboard. At the same time, there were subtle differences between these conditions with respect to the degree of disruption at different phase relationships. However, the sequential structure of feedback events (i.e., monotone versus melody) and the nature of the movement (tapping versus melody production) covaried in that study. In more recent research I manipulated movement type and feedback structure independently. All conditions demonstrated a tendency for inter-response intervals (IRIs) to speed up or slow down as an approximately sinusoidal function of the relative phase of feedback onsets within IRIs. IRIs were longest for delays of 33% relative phase and shortest for delays of 88% relative phase. At the same time, I also found a congruity effect such that disruption was maximal when the structure of feedback matched the movement condition (e.g., melody production with melodic feedback). Congruity effects were stronger for melody production than for isochronous tapping. Taken together, results suggest that DAF disrupts a common timekeeping mechanism that is sensitive to links between the sequence of planned actions and the perceived sequence. Thus, some combination of common and sequence-specific timekeeping is likely.
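As a rough illustration of the sinusoidal relationship described above, the sketch below fits mean inter-response intervals to a sinusoid of feedback relative phase; the data points are fabricated placeholders, not the reported results.

```python
import numpy as np
from scipy.optimize import curve_fit

def iri_model(phase, baseline, amplitude, phi):
    """Mean IRI as a sinusoidal function of feedback delay expressed as relative phase (0-1)."""
    return baseline + amplitude * np.sin(2 * np.pi * phase + phi)

# Placeholder data: relative phase of feedback onsets and mean IRIs (ms), invented for illustration.
phase = np.array([0.0, 0.17, 0.33, 0.50, 0.67, 0.88])
iri   = np.array([500., 520., 545., 515., 495., 470.])

params, _ = curve_fit(iri_model, phase, iri, p0=[500.0, 30.0, -0.5])
baseline, amplitude, phi = params
peak_phase = ((np.pi / 2 - phi) / (2 * np.pi)) % 1.0    # relative phase at which the fitted IRI is longest
print(baseline, amplitude, peak_phase)
```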

 

1:25 – 1:50

ATTENDING TO SYNCOPATED RHYTHMS: AN FMRI STUDY

Edward W. Large, Heather Chapin, and Theodore P. Zanto (Florida Atlantic University)

E-mail: large@ccs.fau.edu

 

The aim of this study was to explore neural activation related to rhythmic auditory attending. According to the dynamic attending hypothesis, attending temporally coordinates neural processes with external rhythmic events. Dynamic attending facilitates event perception, and is thought to enable memory coding of rhythmic patterns. A rhythm reproduction task was used to investigate the role of attending in perceiving, remembering, and reproducing complex rhythmic patterns. Auditory stimuli were ten syncopated patterns in which half of the auditory events fell on metrically weak beats. We manipulated attention in two experimental conditions. In the attend condition, subjects were asked to attend to a repeated rhythmic pattern, mentally rehearse the pattern, and reproduce the pattern by tapping on a MIDI drum pad. In the ignore condition, participants engaged in a verbal memory task. The distracter task involved studying a visually presented list of words, remembering the words during a retention interval, and verbally recalling the words. In both attend and ignore conditions, the rhythmic auditory stimulus and the visual stimulus were presented simultaneously. Brain activation (BOLD) was imaged using a 3T GE MRI system. Using a sparse sampling methodology, images were collected every 12 seconds, dividing the experiment into four phases: acquire (three pattern presentation cycles), attend (three additional presentation cycles), rehearse (three cycles), and reproduce (three cycles). The initial acquisition phase was required for beat induction and pattern acquisition, while in the next three cycles participants could attend to and memorize the pattern. We tested whether participants successfully attended to (or ignored) the pattern by examining the reproductions. If patterns were reproduced correctly (or more than three words were correctly reported), we concluded that the participant had successfully attended to (or ignored) the rhythmic pattern, and these trials were analyzed. Attending to the rhythms, compared with ignoring them, resulted in increased activity in bilateral anterior cingulate (BA 24 and 32) and medial frontal gyrus (BA 9). Anterior cingulate and prefrontal areas have been previously implicated in attention and expectancy, and appear here to be related specifically to rhythmic attending. Interestingly, attending also resulted in similar anterior cingulate and prefrontal activations when compared with acquisition of the rhythmic patterns. Thus, activity in these areas increases after initial beat induction and pattern acquisition. Compared with rest, rhythmic attending recruited activity in right basal ganglia and SMA, areas known to play a role in motor behavior. Implications for the theory of dynamic attending are discussed.

 

1:50 – 2:15

TEMPORALLY SELECTIVE ATTENTION MODULATES EARLY AUDITORY PROCESSING: EVENT-RELATED POTENTIAL EVIDENCE

Lisa Sanders (University of Massachusetts, Amherst)

E-mail: lsanders@psych.umass.edu

 

Selective attention provides a mechanism by which people preferentially process subsets of stimuli when faced with overwhelming amounts of information.  For example, spatially selective attention is important for perceiving complex visual scenes in which multiple objects are presented simultaneously. Spatially selective attention is also important for auditory perception when sounds are presented from multiple locations.  However, a more common auditory perceptual problem arises from the rapidly changing nature of sounds including speech and music. When presented with complex, rapidly changing auditory information, listeners may need to selectively attend to specific times rather than locations.  We present evidence that listeners can direct selective attention to time points and that doing so improves target detection, affects baseline neural activity preceding stimulus presentation, and modulates auditory evoked potentials at a perceptually early stage.  These data suggest listeners may use temporally selective attention to preferentially process the most relevant segments in auditory streams. 

 

2:15 – 2:40

LONG-LASTING EFFECTS OF TEMPORAL CONTEXT IN AUDITORY AND VISUAL GROUPING

Joel S. Snyder1,2, Olivia L. Carter1, Suh-Kyung Lee3, Erin E. Hannon1, Nava Rubin4, Ken Nakayama1, and Claude Alain5 (1Harvard University; 2VA Boston Healthcare System/Harvard Medical School; 3Harvard-MIT Health Sciences and Technology; 4New York University; 5Rotman Research Institute)

E-mail: joel_snyder@hms.harvard.edu

 

Perceptual organization in naturalistic settings typically occurs within a rich context that includes the individual’s recent history of stimulation and perceptual experiences. We carried out a series of experiments examining the influence of preceding context on auditory stream segregation (or “streaming”) and perception of moving plaids. For the streaming paradigm, we presented low tones (A), high tones (B), and silences (-) in a repeating ABA- pattern that could be perceived as a coherent stream of tones with a galloping rhythm (ABA-ABA-…), or two segregated streams of tones with metronome rhythms (A-A-A-A-… and B---B---…). For the moving plaid paradigm, we presented two superimposed orthogonal gratings moving behind an aperture that could be perceived as a coherently moving plaid or two segregated gratings moving past each other. On each trial, we randomly selected the frequency separation between A and B tones (delta-f) or the angle difference between each grating and the direction of coherent motion (alpha). Larger values of delta-f and alpha led to more perception of two segregated objects. However, larger delta-f and alpha on the previous trial had the opposite effect on the current trial, leading to less perception of segregation. This contrastive context effect lasted for several seconds in both the auditory and visual paradigms, suggesting a long-lasting form of sensory or perceptual adaptation. In the auditory paradigm, the same context effect was observed when the previous trial was composed of tones from a different frequency range compared to the current trial, suggesting adaptation of delta-f detectors. This effect was not due to perceptual adaptation because simply perceiving two streams on the previous trial did not cause less perception of one stream on the current trial. In the visual paradigm, the context effect was partially present when the coherent motions of the previous and current trials were in opposite directions (180° shift), but was absent when the coherent motions of the previous and current trials were in orthogonal directions (90° shift), suggesting adaptation of motion detectors that respond to movement along an axis in either direction in addition to direction-selective motion detectors. These results demonstrate the presence of long-lasting contrastive effects of temporal context in both auditory and visual perception that likely operate through adaptation of modality-specific stimulus feature detectors.

 

2:40 – 3:10 Coffee break

3:10 – 3:35

THE ROLE OF IMPUTED VELOCITY IN THE AUDITORY KAPPA EFFECT

Molly J. Henry and J. Devin McAuley (Bowling Green State University)

E-mail: mjhenry@bgnet.bgsu.edu

Lawful movement trajectories of objects in our everyday environment afford predictions about when an object will be where. The visual kappa effect is a demonstration that deviations from expected stimulus spacing, based on a movement trajectory with implied constant velocity (Δs/Δt), tend to produce systematic distortions in perceived stimulus timing. Specifically, altering the spacing of two adjacent sequence elements so that they are closer (or farther apart) in space than expected tends to shorten (or lengthen) the perceived duration between the two elements. One factor that mediates the strength of the visual kappa effect is sequence velocity. Within a restricted range, sequences with a faster implied velocity tend to produce a larger kappa effect (Jones & Huang, 1982). The present study examined the possibility that implied frequency velocity (Δf/Δt) similarly modulates the strength of the auditory kappa effect. Participants judged the timing of a target tone embedded in an ascending three-tone sequence (a kappa cell), while ignoring changes in target tone frequency. The implied frequency velocity of each sequence was varied as a between-subjects variable and could take on one of three values (4 ST / 800 ms, 4 ST / 500 ms, 4 ST / 364 ms). Consistent with visual kappa findings, the auditory kappa effect was larger for kappa-cell sequences that implied a faster frequency velocity (e.g., 4 ST / 364 ms) than for kappa-cell sequences that implied a slower frequency velocity (e.g., 4 ST / 800 ms). The magnitude of the kappa effect was quantified through fits to an imputed velocity model proposed by Jones and Huang (1982). Findings will be interpreted in the context of an auditory motion hypothesis (MacKenzie & Jones, 2005).
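For concreteness, the implied frequency velocities of the three conditions follow directly from the 4-semitone pitch step and the inter-onset intervals; the short calculation below is simple arithmetic over the condition labels, not part of the authors' model fits.

```python
# Implied frequency velocity (semitones per second) for the three between-subjects conditions.
pitch_step_st = 4                         # each sequence ascends in 4-semitone steps
intervals_ms = [800, 500, 364]            # inter-onset intervals defining the three conditions

for ioi in intervals_ms:
    velocity = pitch_step_st / (ioi / 1000.0)
    print(f"4 ST / {ioi} ms  ->  {velocity:.1f} ST/s")
# 5.0, 8.0, and ~11.0 ST/s: the largest kappa effect is reported for the fastest implied velocity.
```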

 

3:35 – 4:00

CONTEXTUAL EFFECTS ON THE PERCEPTION OF DURATION IN SPEECH AND NON-SPEECH

John Kingston, Shigeto Kawahara, Della Chambless, Daniel Mash, and Eve Brenner-Alsop (University of Massachusetts, Amherst)

E-mail: jkingston@linguist.umass.edu

 

In this talk, we report the results of experiments designed to test the competing predictions of direct realist vs. auditorist models and of autonomous vs. interactive models of speech perception. We compared Japanese, Norwegian, Italian, and English listeners’ identification and discrimination of speech stimuli in which the durations of a vowel and a following consonant were varied orthogonally. Listeners with the same language backgrounds also identified non-speech analogues constructed by replacing the vowels with anharmonic sine wave complexes. Vowel duration covaries directly with following consonant duration in Japanese but inversely in Norwegian, Italian, and English, so these experiments test the top-down application of linguistic experience in these tasks. The non-speech analogues were used to test the hypothesis that durational contrast transforms the acoustic durations of the vowel and consonant intervals (or their analogues) as they pass through the auditory system. Japanese listeners identified the consonant intervals in the speech stimuli as long more often after longer vowels, while Norwegian and Italian listeners instead did so more often after short vowels. These biases reflect these listeners’ linguistic experience. English listeners resembled Japanese listeners despite the fact that vowel and following consonant durations covary inversely in English pronunciation. Listeners’ identification of the non-speech stimuli and their discrimination of both the speech and non-speech stimuli did not differ as a function of their linguistic experience, but instead appear to reflect their having added the durations of the two intervals in responding to these stimuli and tasks. We will also argue that the addition of the durations of successive intervals is a post-perceptual rather than an auditory process. These findings provide no evidence that the percept of the duration of an interval contrasts with that of a neighboring interval, and they also indicate that the scope of the top-down application of linguistic knowledge is limited. They thus provide no direct support for an auditorist model of speech perception, but do support an autonomous model of speech perception over an interactive one.

 

4:00 – 4:25

EFFECTS OF ATTENTIONAL DEMANDS AND EVENT STRUCTURE ON THE PSYCHOPHYSICAL SCALING OF PROSPECTIVE AND RETROSPECTIVE TIME JUDGMENTS

Scott W. Brown and Denise K. Rowden-Tibbetts (University of Southern Maine)

e-mail: swbrown@usm.maine.edu

 

Subjects listened to four tape-recorded prose passages ranging in duration from 18 to 48 seconds, and then provided either prospective or retrospective verbal time estimates for each passage.  The experimental design also included two levels of mental workload and two levels of event structure.  The workload manipulation included a control condition involving no additional task requirements, and a detection condition in which subjects tried to detect certain target words in the passages.  As for event structure, the passages were manipulated to produce a coherent version consisting of standard English prose, and an incoherent version in which the passages were syntactically correct but semantically meaningless.  Analyses of power functions relating perceived time and physical time revealed that increased workload produced flatter slopes and larger y-intercepts for prospective judgments but had no effect on retrospective judgments.  However, incoherent structure flattened slopes and increased y-intercepts for both types of time judgments.  Target detection performance was worse under prospective (versus retrospective) and incoherent (versus coherent) conditions.  These findings point to the importance of attentional processes in shaping time perception.
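A minimal sketch of the power-function analysis mentioned above: fitting perceived time = a · (physical time)^b by linear regression in log-log coordinates, where the exponent b is the slope and log a the intercept discussed in the abstract. The estimates below are fabricated for illustration only.

```python
import numpy as np

# Placeholder data: physical durations (s) of the four passages and one subject's verbal estimates (s).
physical  = np.array([18.0, 28.0, 38.0, 48.0])
estimated = np.array([16.0, 23.0, 30.0, 35.0])      # fabricated for illustration

# Fit perceived time = a * physical**b via linear regression in log-log coordinates.
slope, intercept = np.polyfit(np.log10(physical), np.log10(estimated), 1)
b, a = slope, 10 ** intercept
print(f"exponent b = {b:.2f}, coefficient a = {a:.2f}")
# A "flatter slope" in the abstract corresponds to a smaller exponent b;
# a "larger y-intercept" corresponds to a larger log10(a).
```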

 

4:25 – 4:50

TIME ESTIMATION EMBEDDED IN A GENERAL COGNITIVE ARCHITECTURE

Niels Taatgen1,2, Hedderik van Rijn1, and John Anderson1 (1Carnegie Mellon University and 2University of Groningen)

e-mail: taatgen@cmu.edu

 

Time estimation plays a major role in many cognitive tasks, but is usually studied in relative isolation. As a consequence, many general characteristics of cognition like attention and learning have been incorporated in theories of time estimation specifically tailored to explain particular empirical findings. For example, effects of attention are often explained by an attentional gate that is part of the time estimation model. In my talk I will show that a more general theory of attention (a central-bottleneck theory) can explain the existing data, and has made successful predictions for new experiments. The theory is explicated in a model based on the ACT-R cognitive architecture, which allows detailed predictions of empirical data.
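For readers unfamiliar with how interval timing is typically embedded in a cognitive architecture, the sketch below implements a generic noisy pacemaker whose pulses lengthen geometrically, a common description of ACT-R's temporal module; the parameter values and noise form are assumptions for illustration, not taken from this abstract.

```python
import random

def estimate_pulses(duration_ms, t0=11.0, a=1.1, b=0.015):
    """Count noisy pacemaker pulses accumulated over a physical interval.

    Assumed dynamics (illustrative, not from the abstract): each pulse is a times
    longer than the previous one, plus Gaussian noise proportional to its length,
    so the internal scale is compressed and increasingly noisy for longer intervals.
    """
    elapsed, pulse, count = 0.0, t0, 0
    while elapsed < duration_ms:
        pulse = a * pulse + random.gauss(0.0, b * a * pulse)
        elapsed += pulse
        count += 1
    return count

# The same physical interval maps onto a compressed, noisy internal count:
for d in (1000, 2000, 4000):
    print(d, "ms ->", estimate_pulses(d), "pulses")
```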

 

 

5:00 – 6:20       Drinks at Bar (254 Crown Street)

 

6:30 – 8:30       Dinner at Thali (4 Orange Street)