Established August 2020 at EPFL

Welcome to the Mathis Group!

We work at the intersection of computational neuroscience and machine learning, an area sometimes called NeuroAI. Ultimately, we are interested in reverse engineering the algorithms of the brain, both to understand how the brain works and to build better artificial intelligence systems.

We develop machine learning tools for behavioral and neural data analysis and, conversely, try to learn from the brain to solve challenging machine learning problems such as learning motor skills. Check out some of our research directions below.

We are also passionate about wildlife conservation and are thrilled that our tools can contribute beyond neuroscience! Check out our Nature Communications Perspective on machine learning for wildlife conservation, written with many others!

Machine Learning Tools for Animal Behavior Analysis

We strive to develop tools for the analysis of animal behavior. Behavior is a complex reflection of an animal's goals, state and character. Thus, accurately measuring behavior is crucial for advancing basic neuroscience, as well as the study of various neural and psychiatric disorders. However, measuring behavior (from video) is also a challenging computer vision and machine learning problem. Thus, our work advances machine learning and computer vision to push the state of the art for the analysis of behavior.

Published work in this field includes DeepLabCut, a popular open-source software tool for pose estimation. For action segmentation and related behavioral analysis, check out DLC2action, AmadeusGPT, and WildCLIP.

Brain-inspired motor skill learning

Watching any expert athlete, it is apparent that the brain has mastered the elegant control of our bodies. This is an astonishing feat, especially considering the inherent challenges of slow hardware and the sensory and motor latencies that impede control. Understanding how the brain achieves skilled behavior is one of the core questions of neuroscience, which we tackle through modeling with reinforcement learning and control theory. Check out DMAP, Lattice, our winning code for the MyoChallenge at NeurIPS 2022 and 2023, and more!

Task-driven models of proprioception & sensorimotor processing

We develop normative theories and models for sensorimotor transformations and learning. Recent work has demonstrated that networks trained on object-recognition tasks provide excellent models for the visual system. Yet, for sensorimotor circuits this fruitful approach is less explored, perhaps due to the lack of datasets like ImageNet. Thus, we explore task-demands, like controlling an arm or learning motor skills, and investigate the emerging representations and computations. 

One key hypothesis is that brain circuits for motor control and learning emerge when optimizing circuit models for ethological behaviors. We test and improve those models in collaboration with experimental labs.

Initially, we started by modeling the proprioceptive system of humans (Sandbrink* & Mamidanna* et al., eLife 2023). In subsequent work, we expanded this framework to test which hypothesis best explains the neural dynamics of proprioceptive units in the brainstem and somatosensory cortex (Marin Vargas* & Bisi* et al., Cell 2024).


Lab picture in May 2023.

Lab picture in April 2022.

Group picture in October 2021.

The first group picture of the lab, taken when we celebrated the accepted Neuron Primer in mid-September 2020, with a few who weren't around added on the sides (Pauline, Lucas, Alessandro, …) and yet others missing!
