The Brandeis CL Seminar series brings in speakers from research and industry working in computational linguistics and is open to all. If you’d like to be a speaker or to suggest one, contact Marie Meteer, mmeteer at brandeis dot edu.
The Language Application Grid as a Platform for NLP Research
Wednesday, February 1 at 3pm
The LAPPS Grid project (Vassar, Brandeis, CMU, LDC) has developed a platform that provides access to a wide array of language processing tools and resources for research and development in natural language processing (NLP). The project has recently expanded to make the platform more usable by non-technical users, such as those in the Digital Humanities (DH) community. We provide a live demonstration of the LAPPS Grid, ranging from “from scratch” construction of a workflow using atomic tools to a pre-configured Docker image that can be run off-the-shelf on a laptop or in the cloud, for several tasks of relevance to the NLP and DH communities.
Keith Suderman is a Research Assistant with the Department of Computer Science at Vassar College in Poughkeepsie, New York. Keith works full time on the development of the LAPPS Grid API, architecture, and tool integrations.
Lexicography from Scratch: Quantifying meaning descriptions with feature engineering
Harry Bunt, Professor of Language and Artificial Intelligence, Tilburg University
Thursday Oct. 26 at 3:30
Quantification is ubiquitous in natural language: it occurs in every sentence. It occurs whenever a predicate P is applied to a set S of objects, giving rise to questions such as: (1) To how many members of S is P applied? (2) Is P applied to individual members of S, to S as a whole, or to certain subsets of S? (3) What is the size of S? (4) How is S determined by lexical, syntactic, and contextual information? Moreover, if P is applied to combinations of members from different sets, issues of relative scope arise.
Quantification is a complex phenomenon, both from a semantic point of view and because of the intricate relation between its syntax and semantics, and it has been studied extensively by logicians, linguists, and computational semanticists. Nowadays it is generally agreed that quantifier expressions in natural language are noun phrases, which is why quantification arises in every sentence.
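To make the set-theoretic view above concrete, here is an illustrative sketch (not drawn from the talk) of how generalized quantifier theory models determiners as relations between two sets: the restrictor S contributed by the noun and the scope P contributed by the predicate. All names below are hypothetical.

```python
# Illustrative sketch of generalized quantifiers: a determiner is a
# relation between a restrictor set S and a scope predicate P.

def every(S, P):
    """'Every S P': all members of S satisfy P."""
    return all(P(x) for x in S)

def some(S, P):
    """'Some S P': at least one member of S satisfies P."""
    return any(P(x) for x in S)

def most(S, P):
    """'Most S P': more than half of the members of S satisfy P."""
    satisfied = sum(1 for x in S if P(x))
    return satisfied > len(S) / 2

# "Two of the three students laughed."
students = {"ann", "bo", "cy"}
laughed = {"ann", "bo"}

print(every(students, lambda x: x in laughed))  # False
print(some(students, lambda x: x in laughed))   # True
print(most(students, lambda x: x in laughed))   # True
```

Question (1) in the abstract corresponds to which such relation the determiner denotes, and question (3) to the cardinality of S that the relation inspects.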
The International Organization for Standardization (ISO) has in recent years started to develop annotation schemes for semantic phenomena, both in support of linguistic research in semantics and for building semantically more advanced NLP systems. The ISO-TimeML scheme (ISO 24617-1), based on Pustejovsky’s TimeML, was the first ISO standard established in this area; others concern the annotation of dialogue acts, discourse relations, semantic roles, and spatial information. Quantification is currently being considered as a candidate for a next ISO standard annotation scheme. In this talk I will discuss some of the issues involved in developing such a scheme, including the definition of an abstract syntax for the annotations, of concrete XML representations, and of the semantics of the annotations.
Harry Bunt is professor of Linguistics and Computer Science at Tilburg University, The Netherlands. Before that he worked at Philips Research Labs. He studied physics and mathematics at the University of Utrecht and obtained a doctorate (cum laude) in Linguistics at the University of Amsterdam. His main areas of interest are computational semantics and pragmatics, especially in relation to (spoken) dialogue. He developed a framework for dialogue analysis called Dynamic Interpretation Theory, which has been the basis of an international standard for dialogue annotation (ISO 24617-2).
The Brandeis CL Seminar Series hosts Jibo:
Roberto Pieraccini, Head of Conversational Technologies, Jibo Inc.
Friday, December 1, 3pm, Volen 101
Jibo is a robot that sees and understands speech. He has a moving body that complements his verbal communication and expresses his emotions, and cameras and microphones that let him make sense of the world around him. He detects where sounds come from and can track and recognize people’s faces. He has a display to show images, an eye that follows you, and touch sensors. With this array of technologies, Jibo encompasses the ultimate human-machine interface.
In this talk we will give an overview of the technological complexity we took on when, more than four years ago, we started the journey of building the first consumer social robot. We will describe some of the solutions we adopted and give a demo of the product that started shipping a few weeks ago. We will conclude with a discussion of future challenges for short- and long-term research.
ABOUT THE SPEAKER:
Roberto Pieraccini, a scientist, technologist, and the author of “The Voice in the Machine” (MIT Press, 2012), has been at the forefront of speech, language, and machine learning innovation for more than 30 years. He is widely known as a pioneer in the fields of statistical natural language understanding and machine learning for automatic dialog systems, and in their practical application to industrial solutions. As a researcher he worked at CSELT (Italy), Bell Laboratories, AT&T Labs, and IBM T.J. Watson. He led the dialog technology team at SpeechWorks International, was the CTO of SpeechCycle, and was the CEO of the International Computer Science Institute (ICSI) in Berkeley. He now leads the Conversational Technologies team at Jibo. http://robertopieraccini.com