I am an Assistant Professor of Psychology with a secondary appointment in the Department of Statistics and Data Science. My research aims to elucidate the biological computations underlying how we see, reason about, and interact with our physical environment. How does perception transform the raw signals arriving at our sensory organs into things like objects and people, into things that we can think about? This is the key question that drives our research, which we tackle primarily with computational modeling that brings together a diverse range of approaches, including probabilistic modeling, simulation engines (including graphics and physics engines), and advanced approximate Bayesian inference (including deep neural networks, sequential importance samplers, approximate Bayesian computation methods, and their hybrids). We test these models empirically in behavioral and neural experiments to give a unified account of neural function, cognitive processes, and behavior.
Humans have vivid visual intuitions about how soft objects (e.g., liquids, fabrics) respond to external forces, and we draw on these intuitions when deciding how to interact with them. For example, as we reach to pick up a scarf, we have already made predictions about its weight and softness, and we plan our grip accordingly. How do we achieve this? My primary research goal is to discover the computational strategies by which humans perceive and reason about the physical properties of soft objects. An equally important goal is to build a bridge between human cognition and artificial intelligence. Website
As a child, I watched a lot of cartoons. Some had people, others animals, sometimes even aliens. Both children and adults naturally enjoy this medium, although the world of cartoons often violates what we see as natural in our own world. I study the underlying computational substrates that allow humans to bridge the gap between 2D drawings on a screen and 3D physical scenes with intentional agents and interactive devices.
I am a Postbac Research Associate in Psychology helping with research at the lab. My interests are at the intersection of artificial intelligence and computational cognitive neuroscience, and I am particularly interested in applying knowledge about how humans learn and understand the world to AI.
Everyday activities like washing the dishes, playing soccer, or riding a bike require an intuitive understanding not only of what individual objects are and how they behave in the physical world, but also of what other people intend to do next. I want to reverse-engineer the neurocognitive components underlying this intuition, e.g., perceiving scenes, objects, faces, and bodies: how they develop, and how they are computed, represented, and composed in the mind and brain. Guided by psychophysics and neurophysiology, and building on recent advances in Bayesian modeling and deep learning, I aim to build holistic and mechanistic insights into the core aspects of human and animal intelligence and its development.