I am an Assistant Professor of Psychology. My research aims to elucidate multi-level, integrative accounts of how we see and think — accounts that formalize the contents of our percepts and thoughts in concrete engineering terms and make contact with empirical phenomena all the way from cognition to neurobiology. We approach this goal with a distinctly broad computational toolkit, and we test these models in behavioral and neural experiments.
Humans have vivid visual intuitions about how soft objects (e.g., liquids, fabrics) respond to external forces, and we draw on these intuitions when deciding how to interact with such objects. For example, as we reach to pick up a scarf, we have already predicted its weight and softness, and we plan our grip accordingly. How do we achieve this? My primary research goal is to discover the computational strategies humans use to perceive and reason about the physical properties of soft objects. An equally important goal is to build a bridge between human cognition and artificial intelligence.
As a child, I watched a lot of cartoons. Some had people, others animals, sometimes even aliens. Both children and adults naturally enjoy this medium, even though the world of cartoons often violates what we see as natural in our own world. I study the computational substrates that allow humans to bridge the gap from 2D drawings on a screen to 3D, physical scenes with intentional agents and interactive devices.
While human vision is robust to occlusion, distortion, and lighting variation, machine vision systems are known to be vulnerable to such input perturbations and transformations. How can we close this gap? Inspired by the robustness of human vision, I'm interested in exploring techniques from 3D vision and object-centric learning to build more robust machine vision systems. I'm also interested in how such approaches can improve the data efficiency and generalization of computer vision models.
I am broadly interested in human attention, perception, and memory. My research lies at the intersection of computational modeling and behavioral studies of liquid perception. How do people perceive liquids? What representations of liquids do our minds construct? How do people learn the properties of liquids?
At CNCL, we develop hypotheses for how the brain processes incoming sensory information, and we use computational models to explain and test them. My research focuses on understanding the computations in the brain that transform raw visual signals into representations that we, as humans, perceive as objects, motion, texture, and so on. The computational models I use often embed a physics simulator or graphical rendering engine within a generative pipeline — there is no better way to understand the elusive constructions of the brain than to synthesize its mechanisms as closely as possible!
Everyday activities like washing the dishes, playing soccer, or riding a bike require an intuitive understanding not only of what individual objects are and how they behave in the physical world, but also of what other people intend to do next. I want to reverse-engineer how the neurocognitive components underlying this intuition — e.g., perceiving scenes, objects, faces, and bodies — develop, and how they are computed, represented, and composed in the mind and brain. Guided by psychophysics and neurophysiology, and building on recent advances in Bayesian modeling and deep learning, I aim to build holistic and mechanistic insights into the core aspects of human and animal intelligence and its development.
I am interested in statistics for cognitive science. At CNCL, we study the scene perception problem, for which we developed a framework to understand the computational architecture of selective processing. This framework lets us capture two aspects of scene perception: the automaticity of processing navigational affordances and flexible, multi-granular geometry representations.
From neurons to circuits to ecosystems and beyond, my journey in neuroscience is the continuation of a lifelong quest to understand complex systems. My work aims to decode the brain's intricate architecture, a biological marvel teeming with complexity and yet remarkably stable. Composed of billions of stochastic, unreliable, and noisy cells, these interconnected networks support all our memories, control our intricate movements, and give rise to every thought and question ever asked. It's not just about understanding the algorithmic magic that transforms lone neurons into awakened minds; it's about contributing to a collective effort to understand the underpinnings of cognition, which can foster advances in psychology, neuroscience, artificial intelligence, robotics, and beyond. As I chart this course, I'm fueled by the belief that just as synchronous firing uplifts individual neurons into emergent systems, our shared research and humanity can uplift the world. When I'm not immersed in this intellectual voyage, you'll find me embracing the great outdoors — sailing, rock climbing, mountain biking, hiking — often with my trusted canine companion, Kestrel, by my side.
The human ability to learn has always fascinated me: I love learning about new topics, how to speak languages, and how to do well in new activities. Computational cognitive science opens the door to asking how we pick up unfamiliar concepts, vocabulary, or precise movements. In my research at the CNCL, I explore the fundamental cognition of learning how the physical world around us works, by modeling how we perceive the dynamics of physical scenes and how our minds impose temporal structure on them.
Shannon was a postbac research associate.
Eivinas was a postbac research associate.