Date: 15 November 2023, 3:00–4:00 PM

Location: Zoom & DT 2203

Title: Large Language Models and Neuro-Symbolic Architectures: Lessons from Industry

Abstract:

In this review-oriented talk, drawing on lessons learned during my work in industry as a data scientist and cognitive scientist, I’ll discuss the behaviour of generative deep learning models in terms of how they may connect to two areas of cognitive science: cognitive architectures and language structure. I’ll start with a brief review of the evolution of AI architectures such as OpenAI’s GPT models, and then consider ways in which discoveries coming out of this space could shed light on long-standing cognitive science research questions relating to language and cognitive architectures. I’ll conclude with a discussion of neuro-symbolic architectures, which are based on the hypothesis that a synthesis of symbolic and sub-symbolic elements can provide the structures necessary for different types of information processing, and which are now being offered as a possible alternative to ‘vanilla’ deep learning architectures.

Bio:

Jen Schellinck has been the principal of Sysabee, a data science company in Ottawa, since 2012. In that time, she and her team have carried out numerous data science projects for government and industry, and have provided training in data science techniques to a wide range of professionals in these sectors. She received her PhD in Cognitive Science in 2009 and is currently an adjunct researcher in the Cognitive Science department at Carleton University, affiliated with Robert West’s lab, where she focuses on studying emergent group dynamics using multi-agent simulations.