Academic Events

Lecture Notice: Renowned Linguist Louis Goldstein

Posted: 2016-03-08

Lecture title:
Articulatory Phonology: A dynamical theory of phonological representation and the dance of the tongue
Speaker: Professor Louis Goldstein
         Professor at the University of Southern California and internationally renowned linguist
         Founder of the theory of Articulatory Phonology

Time: Wednesday, March 23, 2016, 1:30 – 3:00 p.m.
Venue: Room 111, Yang Yongman Building, School of Foreign Languages, Shanghai Jiao Tong University (Minhang Campus)
Contact: Ding Hongwei (丁红卫)

About the speaker:
Louis Goldstein received his PhD in linguistics in 1977 from UCLA, where he was a student of Peter Ladefoged and Vicky Fromkin. After post-docs at MIT and at the Instituut voor Perceptie Onderzoek (Institute for Perception Research) in Eindhoven, Netherlands, he joined the faculty of Yale University in 1980 as an Assistant Professor. At the same time, he became a research scientist at Haskins Laboratories, with which he is still affiliated today. In 2007, he left Yale and joined the faculty of the University of Southern California (USC), where he continues to be Professor of Linguistics.

Goldstein is best known for his work with Catherine Browman developing the theory of articulatory phonology (summarized in the abstract below), which proposes that the primitives of speech and phonology are articulatory gestures, modeled using dynamical systems. The theory has informed his and others’ work in speech production, speech perception, phonological theory, phonological development, L2 acquisition, reading, automatic speech recognition, speech planning, and prosody. Over the years he has overseen the development of a computer simulation of the gestural model of speech, called TaDA (Task Dynamic Application). Over the last ten years, he has worked on developing a model of speech planning and syllable structure employing coupled oscillators, in collaboration with Elliot Saltzman and Hosung Nam. Some of the predictions of this model have been tested on languages as diverse as Romanian, Italian, French, Polish, Georgian, Tashlhiyt Berber, Chinese, and Moroccan Arabic. Since arriving at USC in 2007, he has been an active participant in the SPAN (Speech Production and Articulation kNowledge) group, headed by Shri Narayanan. His work there has involved developing experiments and methods that employ real-time MRI of the vocal tract to test hypotheses about the gestural composition of speech.

Abstract:
A classic problem in understanding the sound structure of human languages is the apparent mismatch between the sequence of discrete phonological units into which words can be decomposed and the continuous and variable movements of the vocal tract articulators (and their resulting sound properties) that characterize speaking in real time. Articulatory phonology (Browman & Goldstein, 1986; 1992) was originally developed to provide one possible solution to this apparent mismatch, by viewing speech as a dance performed by the vocal tract articulators. The dance can be decomposed into steps, or articulatory gestures. Each gesture can be represented as a task-dynamical system (Saltzman & Munhall, 1989) that produces a constriction within the vocal tract: its parameters are fixed during the time the gesture is being produced, but continuous motion of the articulators lawfully results. In this talk, I will present a brief tutorial on dynamical systems and how they can be applied to modeling phonological gestures. This will be followed by presentation of some recent results showing how gestures can be recovered from the acoustic signal, and how doing so can lead to improved speech recognition performance.
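
To make the idea of a gesture as a dynamical system more concrete, the sketch below simulates one gesture as a critically damped second-order point attractor, the general form used in task dynamics. It is illustrative only, not code from the talk or from TaDA; the tract variable, target, and stiffness values are hypothetical. The point it shows is the one made in the abstract: the gesture's parameters stay fixed while it is active, yet the articulator trajectory that results is smooth and continuous.

# Illustrative sketch (not the speaker's code): one gesture as a critically
# damped point attractor driving a tract variable toward its target.
import numpy as np

def gesture_trajectory(z_start, z_target, stiffness=200.0, duration=0.3, dt=0.001):
    """Integrate m*z'' + b*z' + k*(z - z_target) = 0 with critical damping."""
    m = 1.0
    b = 2.0 * np.sqrt(m * stiffness)   # critical damping: approach target without oscillating
    z, v = z_start, 0.0
    trajectory = []
    for _ in range(int(duration / dt)):
        a = (-b * v - stiffness * (z - z_target)) / m   # fixed parameters throughout the gesture
        v += a * dt
        z += v * dt
        trajectory.append(z)
    return np.array(trajectory)

# Hypothetical example: a lip-closure gesture moving lip aperture from 10 mm toward 0 mm.
print(gesture_trajectory(10.0, 0.0)[::50])   # sample the continuous trajectory every 50 ms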

As we produce words and sentences, the gestures that compose them must be coordinated appropriately in time, according to the dance of the particular language. This coordination needs to be both stable (the pattern for a particular word must be reliably reproduced whenever that word is spoken) and flexible (the pattern can be stretched or shrunk in real time to accommodate different prosodic contexts, resulting in temporal variability). The coupled oscillator model of speech planning (Goldstein, Byrd & Saltzman, 2006; Nam, Goldstein & Saltzman, 2009) attempts to model this coordination by hypothesizing that the initiation of a gesture is triggered by a planning clock and that the ensemble of clocks for all the gestures that compose a syllable are coupled to one another in distinct modes (in-phase, anti-phase, other). This model will be presented and it will be shown how it not only can solve the coordination problem, but can contribute to an understanding of syllable structure, coordination of tones and segments, speech errors, and certain types of phonological alternations.
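
As a rough illustration of the coupling idea, the minimal sketch below couples two phase oscillators Kuramoto-style toward a target relative phase. It is a simplified stand-in, not the speaker's planning model or its implementation, and the rate, coupling strength, and function name are hypothetical. With an in-phase target the two clocks settle into synchrony (as posited for an onset consonant and its vowel); with an anti-phase target they settle half a cycle apart (as posited for a vowel and a coda consonant), and the settled phases would then determine when each gesture is triggered.

# Illustrative sketch (not the authors' model code): two planning clocks
# coupled toward a target relative phase (0 = in-phase, pi = anti-phase).
import numpy as np

def settle_relative_phase(target_phase, steps=20000, dt=0.001, coupling=2.0):
    """Return the final relative phase (radians) of two coupled phase oscillators."""
    omega = 2 * np.pi * 2.0            # both clocks run at ~2 Hz
    theta = np.array([0.0, 1.0])       # arbitrary initial phases
    for _ in range(steps):
        d01 = np.sin(theta[1] - theta[0] - target_phase)
        d10 = np.sin(theta[0] - theta[1] + target_phase)
        theta[0] += (omega + coupling * d01) * dt
        theta[1] += (omega + coupling * d10) * dt
    return (theta[1] - theta[0]) % (2 * np.pi)

print(settle_relative_phase(0.0))      # in-phase coupling  -> relative phase near 0
print(settle_relative_phase(np.pi))    # anti-phase coupling -> relative phase near pi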

Browman, C. P., & Goldstein, L. (1986). Towards an articulatory phonology. Phonology Yearbook, 3, 219-252.
Browman, C. P., & Goldstein, L. (1992). Articulatory phonology: An overview. Phonetica, 49, 155-180.
Goldstein, L., Byrd, D., & Saltzman, E. (2006). The role of vocal tract gestural action units in understanding the evolution of phonology. In M. Arbib (Ed.), From Action to Language: The Mirror Neuron System (pp. 215-249). Cambridge: Cambridge University Press.
Nam, H., Goldstein, L., & Saltzman, E. (2009). Self-organization of syllable structure: A coupled oscillator model. In F. Pellegrino, E. Marsico, & I. Chitoran (Eds.), Approaches to phonological complexity (pp. 299-328). Berlin/New York: Mouton de Gruyter.
Saltzman, E. L., & Munhall, K. G. (1989). A dynamical approach to gestural patterning in speech production. Ecological Psychology, 1, 333-382.

