Please join us on Friday, April 12th, at Tarabant Student Center (17 West Main Street, Newark, DE 1976) for a one-day event where young Linguistics and Cognitive Science researchers present their work. We are also very excited to welcome Professor Joe Toscano (Villanova University) as the keynote speaker for this year's conference; see below for the title and abstract of his talk.
Please see the conference program for details.
Perceiving speech in the moment: What real-time measures of language processing reveal about the information used to recognize spoken words
Dept. of Psychological and Brain Sciences
Research on spoken language comprehension has been central to cognitive science, both historically and in contemporary studies. Indeed, language processing provides a compelling venue for addressing some of the core questions that cognitive scientists are interested in. For instance, classic work in speech perception argued that speech sounds are perceived categorically, offering a framework that was later used to investigate categorization in other perceptual domains. Most of this work focused on listeners' behavioral responses to speech sounds and spoken words. However, such measures provide only a snapshot of the end stage of spoken language comprehension. I will present recent work aimed at more directly investigating the earliest stages of language processing (auditory perception, speech perception, and spoken word recognition). In particular, I will discuss recent neurophysiological and eye-tracking studies that allow us to measure the time-course of spoken word recognition in real time, including data that can be localized to specific subcortical and cortical areas. Results from these experiments reveal that, contrary to classic descriptions of speech perception, listeners are highly sensitive to continuous acoustic differences in the speech signal, maintain this sensitivity over long time scales spanning multiple words, and rapidly integrate the incoming speech stream with higher-level linguistic representations. Together, these results suggest a radical rethinking of language comprehension, requiring us to develop models that are more flexible in the nature of representations used to process spoken language.