| Abstract | A software articulatory synthesizer, based upon a model developed by P. Mermelstein [J. Acoust. Soc. Am. 53, 1070-1082 (1973)], has been implemented on a laboratory computer. The synthesizer is designed as a tool for studying the linguistically and perceptually significant aspects of articulatory events. A prominent feature of this system is that it easily permits modification of a limited set of key parameters that control the positions of the major articulators: the lips, jaw, tongue body, tongue tip, velum, and hyoid bone. Time-varying control over vocal-tract shape and nasal coupling is possible through a straightforward procedure that is similar to key-frame animation: critical vocal-tract configurations are specified, along with excitation and timing information. Articulation then proceeds along a directed path between these key frames within the time script specified by the user. Such a procedure permits a sufficiently fine degree of control over articulator positions and movements. The organization of this system and its present and future applications are discussed. |
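The key-frame procedure described in the abstract can be illustrated as interpolation of articulator parameters between specified vocal-tract configurations. The sketch below is a minimal illustration under stated assumptions, not the paper's actual scheme: the parameter names, the frame rate, and the choice of simple linear interpolation are all hypothetical (the synthesizer moves articulators along a directed path under a user-specified time script, whose exact trajectory rule is not given here).

```python
import numpy as np

# Hypothetical articulator parameter set, loosely following the articulators
# named in the abstract (lips, jaw, tongue body, tongue tip, velum, hyoid bone).
# These names and any values used with them are illustrative only.
ARTICULATORS = ["lip_aperture", "jaw_angle", "tongue_body_x", "tongue_body_y",
                "tongue_tip_x", "velum_opening", "hyoid_height"]

def interpolate_track(key_frames, frame_rate=200.0):
    """Piecewise-linearly interpolate articulator positions between key frames.

    key_frames: list of (time_in_seconds, parameter_vector) pairs, sorted by
                time; each parameter vector has one entry per articulator.
    frame_rate: output frames per second (an assumed value, not the paper's).
    Returns an array of shape (n_frames, n_parameters), one row per frame.
    """
    times = np.array([t for t, _ in key_frames])
    values = np.array([v for _, v in key_frames], dtype=float)
    n_frames = int(round(times[-1] * frame_rate)) + 1
    sample_times = np.arange(n_frames) / frame_rate
    # Interpolate each parameter column independently between key frames.
    return np.column_stack([
        np.interp(sample_times, times, values[:, j])
        for j in range(values.shape[1])
    ])
```

For example, two key frames 100 ms apart yield a smooth track of intermediate configurations; in a real system each row would then drive the vocal-tract area function and, in turn, the acoustic output.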