Orienting to a novel event is a rapid shift of attention toward a change in one’s surroundings; it appears to be a fundamental biological mechanism for survival and essentially functions as a “what is it?” detector. Orienting plays a central role in human learning and development, as it facilitates adaptation to an ever-changing environment. It can thus be viewed as an allocational mechanism by which attention sifts through a complex, multi-sensory world and selects relevant stimuli for further processing. This selection has implications for what will be encoded into memory and how strong those memory traces will be. The ability to differentiate between relevant and irrelevant input, to inhibit the processing of irrelevant stimuli, and to sustain attention requires control and inhibitory processes that improve with age.
Advances in data collection technologies in neuroscience have resulted in a deluge of high-quality data that must be analyzed and presented to the experimentalist in a meaningful way. Usually the data analysis and visualization pipeline is built from scratch for each new experiment, resulting in a significant amount of code duplication and wasted effort rebuilding analysis tools. There is a growing need for a unified system that automates these repetitive tasks and helps biologists understand their data more efficiently.
Our goal is to use deep learning networks to understand which neurons in the brain encode fine motor movements in mice. We have collected large datasets comprising calcium imaging of active neurons and high-resolution video recorded while mice perform motor tasks. We want to use recent advances in deep learning to (1) estimate the poses of mouse body parts at high spatiotemporal resolution, (2) extract behaviorally relevant information, and (3) align this information with neural activity data. Behavioral video analysis is made possible by transfer learning: the ability to take a network trained on a task with a large supervised dataset and reuse it on a task with only a small supervised dataset. This approach was used, for example, in the human pose-estimation algorithm DeeperCut. Recently, such algorithms were tailored for laboratory use in a Python-based toolbox known as DeepLabCut, providing a tool for high-throughput behavioral video analysis.
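As a rough illustration, the sketch below follows the standard DeepLabCut workflow (create a project, label frames, fine-tune a pretrained network via transfer learning, then analyze new videos). The project name, experimenter, and video paths are placeholders, not files from our datasets.

    # Minimal sketch of a DeepLabCut pose-estimation workflow.
    # Project name, experimenter, and video paths are hypothetical placeholders.
    import deeplabcut

    config_path = deeplabcut.create_new_project(
        "mouse-motor-task", "lab", ["videos/session01.mp4"], copy_videos=True
    )
    deeplabcut.extract_frames(config_path)             # sample frames for manual labeling
    deeplabcut.label_frames(config_path)               # annotate body parts in a GUI
    deeplabcut.create_training_dataset(config_path)
    deeplabcut.train_network(config_path)              # fine-tune a pretrained backbone (transfer learning)
    deeplabcut.evaluate_network(config_path)
    deeplabcut.analyze_videos(config_path, ["videos/session02.mp4"])  # pose estimates for new sessions

The resulting per-frame pose estimates can then be synchronized with calcium imaging frames on a shared timebase for the alignment step described above.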
Decoding behavioral signifiers of choice and memory can have far-reaching implications for understanding actions and identifying disease. We use a four-arm maze in which we can observe choices and infer memory in mice, but we have access to very few pre-determined behavioral signifiers. Several recent publications have used computer vision to extract previously inaccessible aspects of behavior, including animal pose estimation (Mathis et al., 2018) and distinguishable internal states (Calhoun et al., 2019). These descriptions allowed the identification and characterization of behavioral dynamics, revealing an unprecedented richness in the behaviors that determine decision making. Applying such computational approaches to behavior in our maze, which has been validated to measure choice and memory, can reveal dimensions of behavior that predict or even determine these psychological constructs. DSI scholars would use pose-estimation analysis to evaluate behavioral signifiers of choice and memory and relate them to our concurrent, real-time measures of neural activity and transmitter release. The students would also have the opportunity to examine how disease models known to impair performance on our maze task affect any identified signifiers.
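One common way to derive internal-state descriptions from pose data, in the spirit of Calhoun et al. (2019), is to segment pose-derived features into discrete behavioral states with a hidden Markov model. The sketch below is only an illustrative assumption using the hmmlearn library and synthetic stand-in features; it is not our analysis pipeline.

    import numpy as np
    from hmmlearn.hmm import GaussianHMM

    # Hypothetical pose-derived features per video frame (e.g., speed, heading
    # change, distance to maze center); random data used as a stand-in here.
    rng = np.random.default_rng(0)
    features = rng.normal(size=(5000, 3))

    # Fit an HMM with a small number of latent behavioral states and label frames.
    hmm = GaussianHMM(n_components=3, covariance_type="full", n_iter=200, random_state=0)
    hmm.fit(features)
    states = hmm.predict(features)   # one discrete state per frame

    # State labels can then be aligned with neural activity or transmitter-release
    # traces sampled on the same timebase.
    print(np.bincount(states))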
All complex behaviors require animals to coordinate their perception and actions. To successfully achieve a goal, a decision maker (DM; be it a human, animal, or artificial agent) must determine which action to take and, faced with far more information than she can fully process, must decide which sources of information to consult to best guide that action. In contrast with natural tasks, however, traditional research has focused primarily on action selection and has largely neglected the process of information demand. We aim to fill this gap by investigating the factors that motivate people to become curious and seek information. We are collecting behavioral data from a large sample of participants on a battery of online tasks testing various aspects of curiosity, and we seek a DSI scholar who can quantitatively analyze the data. The scholar will be supervised by two co-PIs: Jacqueline Gottlieb, in Columbia’s Neuroscience Department and Zuckerman Institute, and Vince Dorie, in the DSI. The scholar will receive training in advanced data-analytic methods and will have the opportunity to co-author what is expected to be a high-impact paper with interdisciplinary appeal in economics, neuroscience, and psychology.
We are constantly exposed to inputs from the outside world, but we do not perceive everything we are exposed to. Some inputs are rather weak: we might perceive them at one point in time, but not at another. The state of our brains right before we receive such sensory inputs influences whether or not we perceive them. Brain oscillations are proposed to play a key role in setting these brain states; however, how exactly these brain rhythms influence our perception remains a topic of active research.
Project: Analyze the behavior of Siamese fighting fish (Betta splendens) as part of a collaboration between the Bendesky and Cunningham labs of the Zuckerman Institute (NeuroTheory Center).