
Claim

Training-Driven Representational Geometry Modularization Predicts Brain Alignment in Language Models

Authors

Yixuan Liu · Zhiyuan Ma · Likai Tang · Runmin Gan · Xinche Zhang · Jinhao Li · Chao Xie · Sen Song

ABSTRACT

The degree to which Large Language Models (LLMs) align with human brain activity during language processing remains a central question in both AI and neuroscience. We investigate the impact of training dynamics on the 'representational geometry' of LLMs and its correlation with fMRI and MEG recordings. Our results reveal that as models undergo training, their internal representations naturally become more modularized, mimicking the functional specialization observed in the human language network. We demonstrate that this modularization is a strong predictor of brain alignment, suggesting that the pressure to compress linguistic information leads to a convergent representational structure between biological and artificial systems.
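The abstract measures "brain alignment" between LLM representations and fMRI/MEG recordings. The paper's exact pipeline is not reproduced here, but a common way to quantify such alignment is representational similarity analysis (RSA): build a representational dissimilarity matrix (RDM) for each system over the same stimuli, then correlate the two. The sketch below is a minimal, hedged illustration of that idea on synthetic data; all variable names and the synthetic stimuli are assumptions, not the authors' code.

```python
# Minimal RSA sketch (illustrative only; not the paper's pipeline).
import numpy as np

def rdm(features):
    # Representational dissimilarity matrix:
    # 1 - Pearson correlation between stimulus rows.
    return 1.0 - np.corrcoef(features)

def rsa_alignment(model_feats, brain_feats):
    # Correlate the upper triangles of the two RDMs --
    # one common scalar measure of representational alignment.
    iu = np.triu_indices(model_feats.shape[0], k=1)
    a, b = rdm(model_feats)[iu], rdm(brain_feats)[iu]
    return np.corrcoef(a, b)[0, 1]

rng = np.random.default_rng(0)
shared = rng.normal(size=(20, 8))           # 20 stimuli with shared latent structure
model = shared @ rng.normal(size=(8, 64))   # synthetic "LLM layer" activations (20 x 64)
brain = shared @ rng.normal(size=(8, 32))   # synthetic "fMRI" responses (20 x 32)
print(rsa_alignment(model, brain))          # a value in [-1, 1]; positive here by construction
```

Because both synthetic systems are projections of the same latent structure, the score comes out positive; with unrelated data it would hover near zero. The paper's claim is that this kind of alignment score rises with the modularization of the model's representational geometry during training.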

Review Snapshot

Average rating: 4.6 / 5 (5 ratings)

5 star: 60%
4 star: 40%
3 star: 0%
2 star: 0%
1 star: 0%

100% of reviewers recommend this content.

