Sandra Nestler

Dr. rer. nat.

Postdoctoral Fellow with Prof. Omri Barak at Technion – Israel Institute of Technology

Latest Research

Ongoing and planned projects.

Multi-task learning

Human behavior is closely linked to our ability to solve multiple tasks, which serve as basic building blocks for more complex behaviors. Prevailing neuroscientific research, however, often focuses on single-task learning in order to examine individual phenomena in detail. Understanding the joint effects between tasks is nevertheless crucial: it provides insight into underexplored questions such as resource sharing, task interference, and how a learned task, in the form of its neural representation, is retained in the network throughout further development.

In this project, we therefore investigate multi-task learning, particularly how learning one task influences the representation of another. Two features of neural networks may underlie learning: the network structure, in the form of its connectivity, and the neural activity propagating along it. We study both side by side by measuring the neural activity and approximating the connection weights with correlations. By bridging theoretical frameworks with empirical observations, this research aims to advance our understanding of neural dynamics in multi-task environments.
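As a schematic of the correlation-based approximation, the following sketch estimates a connectivity proxy from recorded activity. The data and names here are hypothetical placeholders; the actual analysis pipeline is more involved.

```python
# Minimal sketch (surrogate data): approximating the unobserved connection
# weights by the pairwise correlations of the measured neural activity.
import numpy as np

rng = np.random.default_rng(0)

# Surrogate recording: n_neurons activity traces over n_samples time steps.
n_neurons, n_samples = 50, 1000
activity = rng.standard_normal((n_neurons, n_samples))

# Pearson correlation matrix as a simple proxy for the connectivity;
# np.corrcoef treats each row as one variable (here: one neuron).
conn_proxy = np.corrcoef(activity)

# The diagonal is trivially one and carries no information about
# interactions between distinct neurons, so it is zeroed out.
np.fill_diagonal(conn_proxy, 0.0)
print(conn_proxy.shape)  # (50, 50)
```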

Manifold learning with Normalizing Flows

Despite the large number of active neurons in the cortex, the activity of neural populations in different brain regions is expected to live on a low-dimensional manifold. Variants of principal component analysis (PCA) are frequently employed to estimate this manifold. However, these methods are limited by the assumption that the data follow a Gaussian distribution, and they neglect additional features such as the curvature of the manifold. Consequently, their performance as generative models tends to be subpar.
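For illustration, the sketch below shows the standard PCA-based estimate of linear dimensionality on synthetic data; the 90% variance threshold and the data construction are illustrative choices, not part of our analysis.

```python
# Minimal PCA sketch: 3 latent signals embedded in 100 "neurons" plus noise.
import numpy as np

rng = np.random.default_rng(1)
latents = rng.standard_normal((2000, 3))          # samples x latent dims
mixing = rng.standard_normal((3, 100))            # latent -> neuron embedding
data = latents @ mixing + 0.1 * rng.standard_normal((2000, 100))

# PCA via the eigenvalues of the covariance matrix (sorted descending).
cov = np.cov(data, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]
explained = np.cumsum(eigvals) / eigvals.sum()

# Linear dimensionality: components needed to reach 90% explained variance.
dim = int(np.searchsorted(explained, 0.90)) + 1
print(dim)  # recovers ~3 for this construction
```

Note that this estimate only captures a linear subspace: a curved manifold of low intrinsic dimension can still spread its variance over many principal components.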

To fully learn the statistics of neural activity and to generate artificial samples, we use Normalizing Flows (NFs). These neural networks learn a dimension-preserving estimator of the probability distribution of the data. They differ from other generative networks in their simplicity and in their ability to compute the likelihood exactly.
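The exact-likelihood property follows from the change-of-variables formula, log p(x) = log p_base(f(x)) + log|det ∂f/∂x|. The sketch below demonstrates it with a single RealNVP-style affine coupling layer in NumPy; this parametrization is a generic textbook example, not our architecture.

```python
# Minimal normalizing-flow sketch: one affine coupling layer with an
# exactly computable log-likelihood (standard-normal base distribution).
import numpy as np

rng = np.random.default_rng(2)
d, d_half = 4, 2

# Fixed toy "networks" for the coupling layer: log-scale and shift of the
# second half of the dimensions are functions of the first half.
W_s = 0.1 * rng.standard_normal((d_half, d_half))
W_t = 0.1 * rng.standard_normal((d_half, d_half))

def coupling_forward(x):
    """Map data x to latent z; return z and log|det Jacobian|."""
    x1, x2 = x[:, :d_half], x[:, d_half:]
    s = np.tanh(x1 @ W_s)              # log-scale, bounded for stability
    t = x1 @ W_t                       # shift
    z = np.concatenate([x1, x2 * np.exp(s) + t], axis=1)
    return z, s.sum(axis=1)            # triangular Jacobian: log-det = sum(s)

def log_likelihood(x):
    """Exact log p(x) via the change-of-variables formula."""
    z, log_det = coupling_forward(x)
    log_base = -0.5 * (z**2).sum(axis=1) - 0.5 * d * np.log(2 * np.pi)
    return log_base + log_det

x = rng.standard_normal((5, d))
print(log_likelihood(x))               # one exact log-density per sample
```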

Our adaptation of NFs focuses on distinguishing between relevant (in-manifold) and noise (out-of-manifold) dimensions, which allows us to estimate the dimensionality of the neural manifold. Because every layer is a bijective mapping, the network can describe the manifold without losing information, a distinctive advantage of NFs.
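As a hedged illustration of the in-/out-of-manifold split, the heuristic below ranks the latent dimensions of surrogate flow outputs by variance and thresholds them. Both the criterion and the threshold are placeholders for illustration, not the ones used in our method.

```python
# Illustrative heuristic: estimate the manifold dimensionality from the
# variance spectrum of surrogate latent codes z (samples x dimensions).
import numpy as np

rng = np.random.default_rng(3)

# Surrogate latent codes: 3 high-variance (in-manifold) and 7 low-variance
# (out-of-manifold) dimensions.
z = np.concatenate(
    [3.0 * rng.standard_normal((1000, 3)),
     0.2 * rng.standard_normal((1000, 7))],
    axis=1,
)

var = z.var(axis=0)
order = np.argsort(var)[::-1]          # dimensions, most to least variable

# Hypothetical criterion: keep dimensions whose variance exceeds a fixed
# fraction of the largest one.
keep = var[order] > 0.05 * var[order][0]
print("estimated dimensionality:", int(keep.sum()))  # 3 here
```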

Recent publications

More publications on my Publication page.

Learning Interacting Theories from Data

Claudia Merger, Alexandre René, Kirsten Fischer, Peter Bouss, Sandra Nestler, David Dahmen, Carsten Honerkamp, and Moritz Helias

Phys. Rev. X 13, 041033

2023

Experience

Postdoctoral Fellow at Technion, Haifa, Israel

2023 – ongoing

In collaboration with Omri Barak (Technion), Hadas Benisty (Technion), and Simon Musall (RWTH Aachen).

Doctoral Studies at Jülich Research Centre, Jülich, Germany

2019 – 2023

Under the supervision of Moritz Helias at INM-6 / IAS-6
Thesis title: A statistical perspective on learning of time series in neural networks

M.Sc. Physics at RWTH Aachen, Aachen, Germany

2016 – 2019

Master Thesis with Moritz Helias
Thesis title: Joint Optimization of Input and Output Projections in Reservoir Computing

B.Sc. Physics at Bremen University, Bremen, Germany

2013 – 2016

Bachelor Thesis with Stefan Bornholdt
Thesis title: Adapting the SALI/GALI Method to Kauffman Networks