Justin Su from QPID talks about Machine Learning in Healthcare

Machine Learning Approaches to Evaluate Clinical Evidence Quality

Justin Su
NLP/ML Engineer, QPID
 
Friday, November 10 at 3:00
Volen 101
 
The Data Science team at QPID has conducted a machine learning project to develop an approach that decides which requests for a medical procedure are clinically appropriate, based on clinical information from the patient’s medical record provided by a human user. In this talk, I will present and compare the various machine learning and deep learning models that we have experimented with to identify supportive clinical evidence for a given procedure.
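
As a purely illustrative sketch, not QPID’s actual system (the toy notes, labels, and model choice below are all assumptions), a project like this might start from a simple text-classification baseline that maps the clinical evidence attached to a request to an appropriateness label:

```python
# Minimal illustrative baseline: given free-text clinical evidence for a
# procedure request, predict whether the request looks clinically appropriate.
# The data, labels, and model choice are assumptions for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Toy note snippets with a binary label (1 = supportive evidence found, 0 = not supported).
notes = [
    "lumbar mri requested; radiculopathy documented for eight weeks despite physical therapy",
    "lumbar mri requested; acute back pain of three days, no conservative treatment tried",
    "knee arthroscopy requested; mri shows meniscal tear, failed six weeks of therapy",
    "knee arthroscopy requested; mild pain, no imaging on file",
]
labels = [1, 0, 1, 0]

clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),   # bag-of-words / bigram features
    ("model", LogisticRegression()),                   # simple linear classifier
])
clf.fit(notes, labels)

new_note = "shoulder mri requested; persistent pain after twelve weeks of physical therapy"
print(clf.predict([new_note]), clf.predict_proba([new_note]))
```

Deep learning variants of the same task would typically replace the TF-IDF features and linear model with learned embeddings and a neural encoder, which is the kind of model comparison the talk presents.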
Justin Su graduated from the CL MA program at Brandeis in 2017, and joined QPID Health as an NLP/ML Engineer. He does a bit of everything at QPID, which includes software engineering, NLP, machine learning, data science, and baking. 

Harry Bunt: Issues in the semantic annotation of quantification

Harry Bunt

Professor of Language and Artificial Intelligence
Tilburg University

Thursday, October 26 at 3:30
Volen 101

Quantification is ubiquitous in natural language: it occurs in every sentence. It occurs whenever a predicate P is applied to a set S of objects, where it gives rise to such questions as (1) To how many members of S is P applied? (2) Is P applied to individual members of S, or to S as a whole, or to certain subsets of S? (3) What is the size of S? (4) How is S determined by lexical, syntactic and contextual information? Moreover, if P is applied to combinations of members from different sets, issues of relative scope arise.
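
As a concrete illustration of the relative-scope issue (the example and formulas below are added for illustration and are not taken from the abstract), the sentence “Every student read a paper” has two readings, depending on which quantifier takes wide scope:

```latex
% Two scope readings of "Every student read a paper" (illustrative example).
% Reading 1: "every student" outscopes "a paper"; the papers may differ per student.
\forall x\,\big(\mathit{student}(x) \rightarrow \exists y\,(\mathit{paper}(y) \wedge \mathit{read}(x,y))\big)
% Reading 2: "a paper" outscopes "every student"; a single paper was read by all students.
\exists y\,\big(\mathit{paper}(y) \wedge \forall x\,(\mathit{student}(x) \rightarrow \mathit{read}(x,y))\big)
```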

Quantification is a complex phenomenon, both from a semantic point of view and because of the complexity of the relation between the syntax and the semantics of quantification, and it has been studied extensively by logicians, linguists, and computational semanticists. Nowadays it is generally agreed that quantifier expressions in natural language are noun phrases, which is why quantification arises in every sentence.

The International Organization for Standardization (ISO) has in recent years started to develop annotation schemes for semantic phenomena, both in support of linguistic research in semantics and for building semantically more advanced NLP systems. The ISO-TimeML scheme (ISO 24617-1), based on Pustejovsky’s TimeML, was the first ISO standard established in this area; others concern the annotation of dialogue acts, discourse relations, semantic roles, and spatial information. Quantification is currently being considered as the next candidate for an ISO standard annotation scheme. In this talk I will discuss some of the issues involved in developing such an annotation scheme, including the definition of an abstract syntax for the annotations, of concrete XML representations, and of the semantics of the annotations.

Harry Bunt is professor of Linguistics and Computer Science at Tilburg University, The Netherlands. Before that he worked at Philips Research Labs. He studied physics and mathematics at the University of Utrecht and obtained a doctorate (cum laude) in Linguistics at the University of Amsterdam. His main areas of interest are computational semantics and pragmatics, especially in relation to (spoken) dialogue. He developed a framework for dialogue analysis called Dynamic Interpretation Theory, which has been the basis of an international standard for dialogue annotation (ISO 24617-2).

CL Seminar: Lexicography from Scratch

Lexicography from Scratch: Quantifying meaning descriptions with feature engineering

Orion Montoya 
Friday, October 20 at 3:00 pm
Volen 101

When computational linguistics wishes to engage with the meaning of words, it asks the experts: lexicographers, who analyze evidence of usage and then record their judgments in dictionaries, in the form of definitions. A definition is a finely wrought piece of natural language, whose nuances are as elusive to computational processes as any other unstructured data. Computational linguists nevertheless squeeze as much utility as they can out of dictionaries of every stripe, from Webster’s 1913 to WordNet. None of these resources had computational analysis of lexical meaning in mind when they were conceived or created. Despite the immense human cognitive effort that went into making them, most lexical resources constrain their computational users to a few simplistic lookup tasks.
If a lexical resource were designed, from its origins, to serve all the diverse human and computational applications for which dictionaries have been repurposed in the digital era, it might yield significant theoretical and practical improvements. But who wants to make a dictionary from scratch? The theme of the 2017 Electronic Lexicography conference (Leiden, September 19-21: http://elex.link/elex2017/) was “Lexicography From Scratch”. This talk assembles a number of isolated recent innovations in lexicographical practice — often corpus-driven retrofits onto existing dictionary data — and attempts to map out a lexicographical process that would connect them all.
Such a process would yield meaning descriptions that are quantified, linked to corpus data, decomposable into individual semantic factors, and conducive to insightful comparison of lexicalized concepts in pairs and in groups. We describe a cluster-analysis framework that shows promise for automating the fussier parts of this process, reducing the cognitive load on the lexical analyst. If aspects of lexical analysis can be automated through feature engineering, we may produce computational models of lexical meaning that are more useful for NLP tasks and more maintainable by lexicographers.
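
As a rough, hypothetical sketch of what an automated clustering step could look like (the corpus, features, and cluster count below are assumptions, not the framework described in the talk), one could group corpus contexts of a target word and surface the highest-weighted terms of each cluster as candidate distinguishing semantic factors:

```python
# Illustrative sketch: cluster usages of a target word into candidate
# sense/semantic-factor groups. Corpus, features, and cluster count are
# assumptions; the talk's actual framework may differ substantially.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy concordance lines (contexts) for the target word "bank".
contexts = [
    "deposited the check at the bank before noon",
    "the bank raised interest rates on savings accounts",
    "fished from the grassy bank of the river",
    "the river bank eroded after the spring floods",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(contexts)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# For each cluster, list the highest-weighted terms as a rough, human-checkable
# description of the factors that distinguish the usages.
terms = np.array(vectorizer.get_feature_names_out())
for c in range(km.n_clusters):
    top = terms[np.argsort(km.cluster_centers_[c])[::-1][:5]]
    print(f"cluster {c}: {', '.join(top)}")
```

The per-cluster term lists are only rough candidates; the point of such automation is to hand the lexical analyst a smaller, more structured set of decisions rather than to replace the analysis.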
Bio: Orion Montoya graduated from the Brandeis CL MA program in 2017, with the thesis Lexicography as feature engineering: automatic discovery of distinguishing semantic factors for synonyms. Before coming to Brandeis, he spent fifteen years in and around the lexicography industry, computing with lexical data in all of its manifestations: digitizing old print dictionaries, managing lexicographical corpora, linking old lexical data to new corpus data. He also has a BA in Classics from the University of Chicago.

Fidelity

Fidelity is one of the world’s largest financial services firms. Over the years, our commitment to innovation and pioneering approach to customer service has helped us grow our business and expand our reach. As of June 30, 2016, Fidelity administered over $5.4 trillion in total customer assets. Headquartered in Boston, Fidelity staffs additional operations centers with over 18,000 customer service representatives in eleven states, and 193 Investor Centers are spread across the United States. In 2014 Fidelity received the J.D. Power award for full-service investor satisfaction. Fidelity has several strategic projects in cognitive computing to assist representatives in servicing customers and to improve our self-service channels. For more corporate information see jobs.fidelity.com.

Homesite

Homesite is committed to being the most trusted and valued customer-driven insurance company. Homesite is experiencing outstanding growth. Homesite was built on integrity, respect, and striving for excellence; these values, combined with discipline and focus, lead to success. We’re experts in homeowners, renters and condo insurance. It’s all we do. That’s why we’re really good at tailoring all of our products and services to your needs. Here at Homesite, your home is our focus.

The Experimentation Lab was formed to foster technology innovation, agility, and rapid development. The Experimentation Lab is a place to establish thought leadership backed by tactical and strategic projects and products that revolutionize the industry. It is an opportunity to work with state-of-the-art technologies and start-ups, combined with the business knowledge that comes from established leaders in the insurance industry. Specifically, the Lab is looking to incorporate emerging Big Data and analytics technologies to revolutionize how the insurance industry works. This can be everything from drone and satellite imagery to aid post-catastrophe response, to social media analytics to better understand our customers’ needs, to the Internet of Things and connected home and car technologies.

Linguamatics

Linguamatics transforms unstructured big data into impactful insights to advance human health and wellbeing. A world leader in deploying innovative natural language processing (NLP)-based text mining for high-value knowledge discovery and decision support, Linguamatics’ solutions are used by top commercial, academic and government organizations, including 18 of the top 20 global pharmaceutical companies, the US Food and Drug Administration (FDA) and US National Cancer Institute, Cancer Research UK, and leading US healthcare organizations.

Adobe

Adobe is changing the world through digital experiences. We give everyone – from emerging artists to global brands – everything they need to design and deliver exceptional digital experiences. Adobe Document Cloud is revolutionizing the way the world works with documents. It’s the newest cloud offering at Adobe, and a very exciting place to be. The Document Cloud combines a collection of online services integrated with Adobe Reader and Adobe Acrobat. Our subscription base is growing rapidly and we are continually rolling out new features and services. We work in small, agile teams with considerable autonomy and we value engineers with technical competence, creativity, flexibility, strong customer focus and an eagerness for learning and collaboration.