Fall 2008

From Moving Articulators to Sound Structure

Freshman Honors Seminar V50.0233

Prof. Adamantios Gafos (ag63@nyu.edu), O.H. Tues 1–2:30 and by appt., 726 Broadway

 Major concepts / distinctions (elaboration / exemplification)

Broad structure of cognition (perception, computation, action)

Computation ↔ Model = set of rules or equations whose outputs or variables correspond to measurable quantities

Nature of computation / model = Symbolic, Connectionist, Dynamical

Goodness of models: descriptive adequacy, elegance, predictive power (explanatory adequacy)

Why model behavior?

Allows you to check that your theory does account for a specific range of data.

You can still do theory while being explicit.

Due to complexity of the domain, different pieces of your theory may not be mutually consistent. An explicit model allows you to check this.

Allows you to uncover new issues, constraints or predictions imposed by the principles guiding your model construction. That is, the model becomes a tool for conducting more empirical work.

Representations = re-present sensory inputs to the brain or to higher mental levels after the stimulus itself is no longer present

Components of any phonological theory = a theory of representations + a theory of rules or constraints on the representations (Anderson 1985)

        Anderson, S. (1985). Phonology in the Twentieth Century. University of Chicago Press.

Phonological representations: flat (SPE-based matrix) vs. structured (feature geometry)

Phonological representations: static (SPE-based matrix, autosegments) vs. dynamic (Articulatory Phonology, Browman & Goldstein 1986)

Static = Symbolic representations (distinctive features, autosegmental representations, tone, segment-internal structure, skeleton)

Gestural representations (behavioral level, articulatory phonology, time in phonological representations)

Auditory representations (neural level, Guenther's DIVA model, nature of speech targets)
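
To make the static vs. dynamic contrast above concrete, here is a minimal Python sketch (illustrative only: the feature values, parameter names, and numbers are invented for the example, not drawn from any published analysis). A static representation is an unordered bundle of feature values with no intrinsic time, whereas a gestural representation in the spirit of Articulatory Phonology specifies parameters of a dynamical system (a target, a natural frequency, an activation interval) that unfolds in time.

        # Toy contrast: static feature bundle vs. dynamically specified gesture.
        # All values are illustrative, not an analysis of any particular segment.

        # Static, SPE-style: an unordered bundle of binary feature values.
        static_b = {"consonantal": +1, "sonorant": -1, "voice": +1, "labial": +1}

        # Dynamic, gesture-style: parameters of a critically damped mass-spring
        # system plus an activation window, so time is intrinsic to the unit.
        gesture_lip_closure = {
            "tract_variable": "lip aperture",
            "target": 0.0,               # mm (closed lips); hypothetical value
            "freq": 30.0,                # rad/s natural frequency; sets movement speed
            "activation": (0.0, 0.15),   # seconds during which the gesture is active
        }

        def gesture_trajectory(g, x0=10.0, dt=0.001):
            """Integrate x'' = -k*(x - target) - b*x' over the activation interval."""
            k = g["freq"] ** 2           # stiffness
            b = 2 * g["freq"]            # critical damping
            x, v = x0, 0.0
            t0, t1 = g["activation"]
            traj, t = [], t0
            while t < t1:
                a = -k * (x - g["target"]) - b * v
                v += a * dt
                x += v * dt
                traj.append((round(t, 3), round(x, 2)))
                t += dt
            return traj

        print(gesture_trajectory(gesture_lip_closure)[::50])  # sample every 50 ms

The point of the contrast is only that the second object cannot be stated without time, which is what "dynamic" means in the items above.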

Reduction = “explain the facts of phonology in terms of basic principles of the physical sciences”; the appeal is that we would end up with fewer irreducible principles than we thought we needed at first sight.

Reduction (Type 1: the level of description changes, as in “phonology can be reduced to phonetics”; Type 2: the level of description stays the same, as in deriving the effects of one mechanism from, or replacing it by, an independently needed mechanism, as in “spreading can be reduced to correspondence”). Example of Type 1: Ohala, J. (1990). There is no interface between phonetics and phonology: a personal view. Journal of Phonetics, 18, 153-171.

Closure: the higher levels are inextricably linked to the lower levels

Emergence: higher levels exhibit properties that cannot be expressed in the language of the lower level (H. Pattee 1973, S. Kauffman 1995). For phonology, see Sapir, E. 1925. Sound patterns in language. Language 1 (2), 37-51.

Hierarchical organization of cognition (in Type 1 reduction, finding a phonetic basis of a phonological pattern does not preclude existence of higher-level description and representation)

Symbolic computation = representations + processes (algorithms)
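
As a minimal illustration of “representations + processes” (a sketch only; the rule, the context, and the segment inventory are invented for the example): a symbolic computation takes a discrete representation and applies an explicit rewrite process to it.

        # Toy symbolic computation: a representation (string of segments) plus a
        # process (an SPE-style rewrite rule applied wherever its context holds).
        VOWELS = set("aeiou")                 # hypothetical inventory
        VOICING = {"p": "b", "t": "d", "k": "g"}

        def intervocalic_voicing(segments):
            # Rule: voiceless stop -> voiced / V __ V
            out = list(segments)
            for i in range(1, len(out) - 1):
                if out[i] in VOICING and out[i-1] in VOWELS and out[i+1] in VOWELS:
                    out[i] = VOICING[out[i]]
            return "".join(out)

        print(intervocalic_voicing("ata"))    # -> "ada"
        print(intervocalic_voicing("atka"))   # -> "atka" (context not met)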

Relation between discreteness & continuity

Interfaces or transducers (symbolic ↔ continuous); Fodor & Pylyshyn 1981, Harnad 1990 (a toy sketch follows the references below)

        Fodor, J. A., & Pylyshyn, Z. W. (1981). How direct is visual perception? Some reflections on Gibson’s ‘ecological approach’. Cognition, 9, 139-196.

        Harnad, S. (1990). The symbol grounding problem. Physica D, 42, 335-346.
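
A toy way to picture the transduction problem these papers raise (a sketch only; the 25 ms boundary and the category labels are arbitrary stand-ins, not claims about any particular language): mapping a continuous measurable quantity onto a discrete symbol means committing to a partition of the continuum, and the question is where such partitions come from and how the symbols remain grounded in them.

        # Toy transducer: continuous voice onset time (ms) -> discrete symbol.
        # The 25 ms boundary is an arbitrary stand-in for illustration.
        def categorize_vot(vot_ms: float) -> str:
            return "voiced stop" if vot_ms < 25.0 else "voiceless stop"

        for vot in (5.0, 20.0, 30.0, 60.0):
            print(vot, "->", categorize_vot(vot))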

Dynamic laws relating order parameters to control parameters

Context effects in cognition

General form of dynamical models

Non-linearity

Fixed points: attractors and repellers

Dynamic stability (preferred modes, patterns are resistant to noise and exhibit small fluctuations around their mean states)

Change: variation in a control parameter can result in qualitative changes in the dynamics (bifurcations)
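
The last several items can be illustrated with one standard example (a sketch, assuming for concreteness a Haken-Kelso-Bunz-style relative-phase law; the parameter values are arbitrary). The order parameter evolves under a nonlinear law dx/dt = f(x; c); zeros of f are fixed points, attractors where the slope of f is negative and repellers where it is positive; and smoothly varying the control parameter c can abruptly change which fixed points are attractors.

        # Sketch of a one-dimensional dynamical law dphi/dt = f(phi; c), using a
        # Haken-Kelso-Bunz-style relative-phase potential as the illustration.
        # Parameter values are arbitrary; only the qualitative picture matters.
        import math

        def f(phi, b_over_a):
            # dphi/dt = -dV/dphi with V(phi) = -cos(phi) - (b/a) * cos(2*phi)
            return -(math.sin(phi) + 2 * b_over_a * math.sin(2 * phi))

        def classify_fixed_point(phi, b_over_a, eps=1e-4):
            # phi is a fixed point (f = 0 there); attractor iff slope of f < 0.
            slope = (f(phi + eps, b_over_a) - f(phi - eps, b_over_a)) / (2 * eps)
            return "attractor" if slope < 0 else "repeller"

        for b_over_a in (1.0, 0.1):          # control parameter, slowly varied
            print("b/a =", b_over_a)
            for phi in (0.0, math.pi):       # in-phase and anti-phase modes
                print("  phi =", round(phi, 2), "->", classify_fixed_point(phi, b_over_a))

At b/a = 1.0 both modes are attractors (bistability); at b/a = 0.1 the anti-phase mode has become a repeller, i.e., a qualitative change in the dynamics brought about by a quantitative change in a control parameter, while the remaining stable state stays resistant to small perturbations.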

Performance = Data = F(Competence) + Noise, where F is the lawful link between competence and performance (a worked toy illustration follows the references below)

                (‘derivational theory of complexity’: sentences whose derivation involves more computations are more difficult to understand than sentences whose derivation involves fewer computations; Chomsky 1965; Fodor, Bever, & Garrett 1974. More recently: the grammar is the parser, Phillips 1996; Hawkins 2004 on how grammatical principles, or as he calls them ‘conventions’, conventionalize domain-general but interacting and competing performance factors such as efficiency.)

                                Chomsky, N. (1965). Aspects of the theory of syntax. Cambridge, MA: MIT Press.

                                Fodor, J. A., Bever, T., & Garrett, M. F. (1974). The psychology of language: An introduction to psycholinguistics and generative grammar. New York: McGraw-Hill.

                                Hawkins, J. A. (2004). Efficiency and Complexity in Grammars. Oxford: Oxford University Press.
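
A worked illustration of the formula above (the linear form of F, the numbers, and the noise level are assumptions made purely for the example, not results from the literature): let predicted difficulty grow lawfully with the number of computations in a derivation and let observed performance scatter around it.

        # Hypothetical illustration of Performance = F(Competence) + Noise.
        # The linear F and the noise level are assumptions, not results.
        import random
        random.seed(0)

        def F(n_computations, base=300.0, per_step=50.0):
            # Lawful link: predicted difficulty (say, ms of reading time)
            # increases with the number of computations in the derivation.
            return base + per_step * n_computations

        for n in (1, 3, 5):                              # increasing derivational complexity
            observed = F(n) + random.gauss(0.0, 20.0)    # performance = F(competence) + noise
            print(f"{n} computations: predicted {F(n):.0f} ms, observed {observed:.0f} ms")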

Top-down vs. bottom-up information processing

Top-down approaches begin with hypotheses about computational mechanisms (e.g., constraint ranking) and then ask how such mechanisms might operate at the behavioral (vocal tract) or neural (cerebellum) level; see Medina, J. F., & Mauk, M. D. (2000). Computer simulation of cerebellar information processing. Nature Neuroscience, 3, 1205-1211.

Bottom-up approaches attempt to explain properties of the system by taking into account, as accurately as possible, known properties of the behavioral level (sensory and motor principles) or the neural level (cellular and synaptic components).