#NeuroAI


We wrote a little #NeuroAI piece about in-context learning & neural dynamics vs. continual learning & plasticity, two mechanisms for flexibly adapting to changing environments:
arxiv.org/abs/2507.02103

We relate this to non-stationary rule learning & switching tasks with rapid performance jumps.

Feedback welcome!

arXiv.org · What Neuroscience Can Teach AI About Learning in Continuously Changing Environments
Modern AI models, such as large language models, are usually trained once on a huge corpus of data, potentially fine-tuned for a specific task, and then deployed with fixed parameters. Their training is costly, slow, and gradual, requiring billions of repetitions. In stark contrast, animals continuously adapt to the ever-changing contingencies in their environments. This is particularly important for social species, where behavioral policies and reward outcomes may frequently change in interaction with peers. The underlying computational processes are often marked by rapid shifts in an animal's behaviour and rather sudden transitions in neuronal population activity. Such computational capacities are of growing importance for AI systems operating in the real world, like those guiding robots or autonomous vehicles, or for agentic AI interacting with humans online. Can AI learn from neuroscience? This Perspective explores this question, integrating the literature on continual and in-context learning in AI with the neuroscience of learning on behavioral tasks with shifting rules, reward probabilities, or outcomes. We will outline an agenda for how specifically insights from neuroscience may inform current developments in AI in this area, and, vice versa, what neuroscience may learn from AI, contributing to the evolving field of NeuroAI.
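To make the contrast concrete, here is a toy sketch (not from the paper; the task, parameters, and policies are all invented for illustration) of the two adaptation routes on a reward-switching bandit: a continual learner that changes its parameters with a delta rule, versus an in-context learner whose parameters stay frozen while it conditions on a short window of recent outcomes.

```python
# Toy contrast between the two adaptation mechanisms discussed in the paper:
# (a) continual learning: parameters (Q-values) change via a learning rule;
# (b) in-context learning: parameters are frozen, adaptation comes from
#     conditioning on a window of recent observations.
import numpy as np

rng = np.random.default_rng(0)
n_trials, switch_every = 600, 150
good_arm = (np.arange(n_trials) // switch_every) % 2   # rewarding arm flips

def reward(arm, t):
    return float(rng.random() < (0.9 if arm == good_arm[t] else 0.1))

# (a) Continual learner: delta-rule update of per-arm values ("plasticity")
Q, alpha = np.zeros(2), 0.2
hits_cl = []
for t in range(n_trials):
    a = int(np.argmax(Q)) if rng.random() > 0.1 else int(rng.integers(2))
    r = reward(a, t)
    Q[a] += alpha * (r - Q[a])            # the "weights" change
    hits_cl.append(a == good_arm[t])

# (b) In-context learner: fixed rule over a sliding window of 10 outcomes
context = []                               # (arm, reward) pairs
hits_icl = []
for t in range(n_trials):
    means = [np.mean([r for a_, r in context if a_ == arm] or [0.5])
             for arm in (0, 1)]
    a = int(np.argmax(means)) if rng.random() > 0.1 else int(rng.integers(2))
    r = reward(a, t)
    context = (context + [(a, r)])[-10:]   # only the context changes
    hits_icl.append(a == good_arm[t])

print(f"continual: {np.mean(hits_cl):.2f}  in-context: {np.mean(hits_icl):.2f}")
```

Both learners recover after each switch; the interesting question the paper raises is how brains combine the two, and how fast each route can be.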

new brain foundation model study, this time with a graph neural net architecture and fMRI, looking at a range of disorders and downstream tasks. arxiv.org/abs/2506.02044v1 #neuroAI

arXiv.org · A Brain Graph Foundation Model: Pre-Training and Prompt-Tuning for Any Atlas and Disorder
As large language models (LLMs) continue to revolutionize AI research, there is a growing interest in building large-scale brain foundation models to advance neuroscience. While most existing brain foundation models are pre-trained on time-series signals or region-of-interest (ROI) features, we propose a novel graph-based pre-training paradigm for constructing a brain graph foundation model. In this paper, we introduce the Brain Graph Foundation Model, termed BrainGFM, a unified framework that leverages graph contrastive learning and graph masked autoencoders for large-scale fMRI-based pre-training. BrainGFM is pre-trained on a diverse mixture of brain atlases with varying parcellations, significantly expanding the pre-training corpus and enhancing the model's ability to generalize across heterogeneous fMRI-derived brain representations. To support efficient and versatile downstream transfer, we integrate both graph prompts and language prompts into the model design, enabling BrainGFM to flexibly adapt to a wide range of atlases, neurological and psychiatric disorders, and task settings. Furthermore, we employ meta-learning to optimize the graph prompts, facilitating strong generalization to previously unseen disorders under both few-shot and zero-shot learning conditions via language-guided prompting. BrainGFM is pre-trained on 27 neuroimaging datasets spanning 25 common neurological and psychiatric disorders, encompassing 2 types of brain atlases (functional and anatomical) across 8 widely-used parcellations, and covering over 25,000 subjects, 60,000 fMRI scans, and a total of 400,000 graph samples aggregated across all atlases and parcellations. The code is available at: https://github.com/weixinxu666/BrainGFM
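For readers unfamiliar with the setup: the input to graph-based brain models like this is typically one functional connectivity graph per scan. A minimal sketch of that construction step follows (shapes, the top-k threshold, and the node features are illustrative assumptions; the authors' actual pipeline is in their linked repo).

```python
# Sketch of the usual first step for graph-based brain models: turn ROI
# time series from one atlas/parcellation into a graph (nodes = ROIs,
# edges = strongest functional connections).
import numpy as np

rng = np.random.default_rng(0)
n_rois, n_timepoints = 100, 200           # e.g. a 100-region parcellation
ts = rng.standard_normal((n_rois, n_timepoints))  # stand-in for real fMRI

fc = np.corrcoef(ts)                       # functional connectivity matrix
np.fill_diagonal(fc, 0.0)

k = 10                                     # keep top-k edges per node
adj = np.zeros_like(fc)
for i in range(n_rois):
    nbrs = np.argsort(np.abs(fc[i]))[-k:]
    adj[i, nbrs] = np.abs(fc[i, nbrs])     # |r| as edge weight, a common choice
adj = np.maximum(adj, adj.T)               # symmetrize

node_feats = fc                            # connectivity profile per node
print(adj.shape, node_feats.shape, int((adj != 0).sum()))
```

The atlas-mixing trick in the paper then amounts to pre-training one model over graphs built this way from many different parcellations.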

#neuroAI preprint alert: studying the alignment of multimodal AI features with brain activity during movie watching. Notably, it identifies brain areas where multimodal features outperform unimodal stimulus representations. And it uses @cneuromod.ca's movie10 dataset :) arxiv.org/abs/2505.20027

arXiv.org · Multi-modal brain encoding models for multi-modal stimuli
Despite participants engaging in unimodal stimuli, such as watching images or silent videos, recent work has demonstrated that multi-modal Transformer models can predict visual brain activity impressively well, even with incongruent modality representations. This raises the question of how accurately these multi-modal models can predict brain activity when participants are engaged in multi-modal stimuli. As these models grow increasingly popular, their use in studying neural activity provides insights into how our brains respond to such multi-modal naturalistic stimuli, i.e., where it separates and integrates information across modalities through a hierarchy of early sensory regions to higher cognition. We investigate this question by using multiple unimodal and two types of multi-modal models (cross-modal and jointly pretrained) to determine which type of model is more relevant to fMRI brain activity when participants are engaged in watching movies. We observe that both types of multi-modal models show improved alignment in several language and visual regions. This study also helps in identifying which brain regions process unimodal versus multi-modal information. We further investigate the contribution of each modality to multi-modal alignment by carefully removing unimodal features one by one from multi-modal representations, and find that there is additional information beyond the unimodal embeddings that is processed in the visual and language regions. Based on this investigation, we find that while for cross-modal models, their brain alignment is partially attributed to the video modality; for jointly pretrained models, it is partially attributed to both the video and audio modalities. This serves as a strong motivation for the neuroscience community to investigate the interpretability of these models for deepening our understanding of multi-modal information processing in brain.
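Comparisons like this typically rest on voxelwise encoding models: regress measured fMRI responses on model features and score held-out prediction accuracy per voxel. A minimal sketch with ridge regression (the feature matrices are random placeholders standing in for pretrained-network activations, and `voxelwise_r` plus all parameters are invented for illustration):

```python
# Encoding-model comparison: unimodal vs multi-modal features, scored by
# per-voxel correlation between predicted and measured held-out responses.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_trs, n_vox = 400, 50
X_uni = rng.standard_normal((n_trs, 256))      # e.g. video-only features
X_multi = np.hstack([X_uni, rng.standard_normal((n_trs, 256))])  # + audio
W = rng.standard_normal((512, n_vox))
Y = X_multi @ W * 0.1 + rng.standard_normal((n_trs, n_vox))  # fake fMRI

def voxelwise_r(X, Y, split=300, alpha=100.0):
    """Fit ridge on the first `split` TRs, return per-voxel Pearson r on the rest."""
    model = Ridge(alpha=alpha).fit(X[:split], Y[:split])
    pred, Yt = model.predict(X[split:]), Y[split:]
    num = ((pred - pred.mean(0)) * (Yt - Yt.mean(0))).sum(0)
    den = pred.std(0) * Yt.std(0) * len(Yt)
    return num / den

r_uni, r_multi = voxelwise_r(X_uni, Y), voxelwise_r(X_multi, Y)
print(f"mean r unimodal={r_uni.mean():.3f}  multimodal={r_multi.mean():.3f}")
```

The paper's "remove one modality at a time" analysis is the same machinery run with ablated feature matrices, asking which voxels lose accuracy.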

I'm giving an online talk starting in 15m (as part of UCL's NeuroAI series).

It's on neural architectures and our current line of research trying to figure out what they might be good for (including some philosophy: what might an answer to this question even look like?).

Sign up (free) here to get the Zoom link:

eventbrite.co.uk/e/ucl-neuroai

Eventbrite · UCL NeuroAI Talk Series
A series of NeuroAI-themed talks organised by the UCL NeuroAI community. Talks will continue on a monthly basis.

Come along to my (free, online) UCL NeuroAI talk next week on neural architectures. What are they good for? All will finally be revealed and you'll never have to think about that question again afterwards. Yep. Definitely that.

🗓️ Wed 12 Feb 2025
⏰ 2-3pm GMT
ℹ️ Details and registration: eventbrite.co.uk/e/ucl-neuroai

Eventbrite · UCL NeuroAI Talk Series
A series of NeuroAI-themed talks organised by the UCL NeuroAI community. Talks will continue on a monthly basis.

🧠 Exploring the secrets of human vision today at #McGill University! I'll be talking about how our brains achieve efficient visual processing through foveated retinotopy - nature's brilliant solution for high-res central vision.

👉 When: Wednesday 9th of January 2025 at 12 noon.

👉 Where: CRN seminar room, Montreal General Hospital, Livingston Hall, L7-140, with hybrid option.

with Jean-Nicolas Jérémie and Emmanuel Daucé

📄 Read our findings: arxiv.org/abs/2402.15480

TL;DR: Standard #CNNs naturally mimic human-like visual processing when fed images that match our retina's center-focused mapping. Could this be the key to more efficient AI vision systems?
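For the curious, the "center-focused mapping" can be approximated by a log-polar resampling of the image before it is fed to a CNN. A rough sketch (the grid sizes and nearest-neighbour sampling are illustrative, not the paper's exact transform):

```python
# Log-polar "retinal" resampling: resolution is highest at the fixation
# point and falls off with eccentricity, mimicking foveated retinotopy.
import numpy as np

def log_polar_warp(img, n_rad=64, n_ang=64):
    h, w = img.shape[:2]
    cy, cx = h / 2, w / 2
    r_max = min(cy, cx)
    # log-spaced radii: many samples near the center (fovea), few far out
    radii = np.exp(np.linspace(0, np.log(r_max), n_rad)) - 1
    angles = np.linspace(0, 2 * np.pi, n_ang, endpoint=False)
    rr, aa = np.meshgrid(radii, angles, indexing="ij")
    ys = np.clip((cy + rr * np.sin(aa)).astype(int), 0, h - 1)
    xs = np.clip((cx + rr * np.cos(aa)).astype(int), 0, w - 1)
    return img[ys, xs]                    # (n_rad, n_ang) warped image

img = np.random.rand(224, 224)            # stand-in for a real photo
print(log_polar_warp(img).shape)          # -> (64, 64)
```

The warped image is much smaller than the original while preserving central detail, which is where the efficiency argument comes from.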

#ComputationalNeuroscience

#NeuroAI

laurentperrinet.github.io/talk

Proud to have managed to finish a #neuromorphic manuscript with Chiara De Luca, Mirco Tincani and Elisa Donati just before the end of the year!

It demonstrates the benefits of using #braininspired principles of computation to achieve robust processing across multiple time-scales, despite the inherent variability of the underlying computational substrate (silicon neurons that faithfully emulate biological ones):
A neuromorphic multi-scale approach for heart rate and state detection
doi.org/10.21203/rs.3.rs-57373
#neuromorphic #wearable #neuroai #SpikingNeuralNetwork

doi.org · A neuromorphic multi-scale approach for heart rate and state detection
With the advent of novel sensor and machine learning technologies, it is becoming possible to develop wearable systems that perform continuous recording and processing of biosignals for health or body state assessment. For example, modern smartwatches can already track physiological functions, in...
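As a toy illustration of the multi-timescale idea (not the paper's silicon-neuron circuits; the signal and all parameters are invented): two leaky integrators with different time constants can read the same noisy pulse train at the beat level and at the slower body-state level.

```python
# Two leaky integrators, the core LIF dynamics without spiking, applied to
# a noisy "heartbeat" signal: a fast time constant follows individual
# beats, a slow one follows the average rate / state drift.
import numpy as np

rng = np.random.default_rng(0)
dt, T = 1e-3, 10.0                        # 1 ms steps, 10 s of signal
t = np.arange(0, T, dt)
beat = (np.sin(2 * np.pi * 1.2 * t) > 0.95).astype(float)  # ~72 bpm pulses
x = beat + 0.2 * rng.standard_normal(t.size)               # noisy sensor

def leaky_integrate(x, tau):
    y = np.zeros_like(x)
    for i in range(1, x.size):
        y[i] = y[i - 1] + dt / tau * (x[i] - y[i - 1])
    return y

fast = leaky_integrate(x, tau=0.05)       # tracks single beats
slow = leaky_integrate(x, tau=2.0)        # tracks average rate / state
print(f"fast swing {np.ptp(fast):.2f} vs slow swing {np.ptp(slow):.2f}")
```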

(10/n) If you’ve made it this far, you’ll definitely want to check out the full paper. Grab your copy here:
biorxiv.org/content/10.1101/20
📤 Sharing is highly appreciated!
#compneuro #neuroscience #NeuroAI #dynamicalsystems

bioRxiv · From spiking neuronal networks to interpretable dynamics: a diffusion-approximation framework
Modeling and interpreting the complex recurrent dynamics of neuronal spiking activity is essential to understanding how networks implement behavior and cognition. Nonlinear Hawkes process models can capture a large range of spiking dynamics, but remain difficult to interpret, due to their discontinuous and stochastic nature. To address this challenge, we introduce a novel framework based on a piecewise deterministic Markov process representation of the nonlinear Hawkes process (NH-PDMP) followed by a diffusion approximation. We analytically derive stability conditions and dynamical properties of the obtained diffusion processes for single-neuron and network models. We established the accuracy of the diffusion approximation framework by comparing it with exact continuous-time simulations of the original neuronal NH-PDMP models. Our framework offers an analytical and geometric account of the neuronal dynamics repertoire captured by nonlinear Hawkes process models, both for the canonical responses of single neurons and neuronal-network dynamics, such as winner-take-all and traveling wave phenomena. Applied to human and nonhuman primate recordings of neuronal spiking activity during speech processing and motor tasks, respectively, our approach revealed that task features can be retrieved from the dynamical landscape of the fitted models. The combination of NH-PDMP representations and diffusion approximations thus provides a novel dynamical analysis framework to reveal single-neuron and neuronal-population dynamics directly from models fitted to spiking data.
Competing Interest Statement: The authors have declared no competing interest.
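To give a flavor of what a diffusion approximation buys you computationally, here is an illustrative Euler–Maruyama integration of a self-excited rate model with sqrt-intensity noise, the usual diffusion scaling of point-process variability (the transfer function and parameters are invented; this is not the paper's NH-PDMP derivation):

```python
# Euler-Maruyama integration of a diffusion approximation to a
# self-exciting rate process: the drive h decays with time constant tau,
# is pushed up by its own rate f(h), and fluctuates with sqrt(rate) noise.
import numpy as np

rng = np.random.default_rng(0)
dt, n_steps = 1e-3, 20_000
tau, w, h0 = 0.02, 0.5, 0.1               # invented parameters

def f(h):                                  # sigmoidal rate function, max 100 Hz
    return 100.0 / (1.0 + np.exp(-5.0 * (h - 0.5)))

h = np.zeros(n_steps)
for i in range(1, n_steps):
    rate = f(h[i - 1])
    drift = (-h[i - 1] + h0 + w * tau * rate) / tau
    diff = w * np.sqrt(rate)               # sqrt(intensity) diffusion scaling
    h[i] = h[i - 1] + drift * dt + diff * np.sqrt(dt) * rng.standard_normal()

print(f"stationary mean rate ~ {f(h[n_steps // 2:]).mean():.1f} Hz")
```

The payoff of the continuous formulation is exactly what the abstract describes: you can read fixed points and stability off the drift term instead of re-simulating the discontinuous point process.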

I'm thrilled to announce that I'll be speaking at WT| Wearable Technologies Conference 2024 India, the premier event for wearable technology innovators, creators, and enthusiasts!

📅 When: December 9, 2024

📍 Where: The LaLiT Ashok, Bengaluru, India

🎙️ Topic: NeuroAI-Enhanced Wearables for Precision Psychiatry and Mental Health

Wearable tech is reshaping industries, from healthcare and fitness to fashion and entertainment. During my session, I'll explore how NeuroAI, the convergence of neuroscience and artificial intelligence, together with Neuromorphic Computing (NC) chips, enhances the utility of wearables in monitoring, predicting, and supporting mental health. Through a combination of computational psychiatry and human-centered AI design, #NeuroAI-driven wearables hold promise not just for improving diagnostics, but also for delivering scalable mental health support in everyday life. This approach has implications for the future of digital psychiatry and for ethical considerations in data privacy, clinical application, and human-AI collaboration in mental health contexts.

This conference brings together brilliant minds to discuss the future of wearable innovations, and I'm honored to be part of the lineup.

If you're attending, be sure to join my session and say hello!

👉 For more details and to register, visit: lnkd.in/gMnkAQ3j

See you there! 🚀

#WearableTech #Innovation #SpeakingEngagement #IoTInnovation #TechLeaders #FutureOfWearables #WearableTechConferenceINDIA2024