AI Summary: Discovers that the natural modularization of LLM representations during training is a key predictor of their alignment with human brain activity.
The degree to which Large Language Models (LLMs) align with human brain activity during language processing remains a central question in both AI and neuroscience. We investigate the impact of training dynamics on the 'representational geometry' of LLMs and its correlation with fMRI and MEG recordings. Our results reveal that as models undergo training, their internal representations naturally become more modularized, mimicking the functional specialization observed in the human language network. We demonstrate that this modularization is a strong predictor of brain alignment, suggesting that the pressure to compress linguistic information leads to a convergent representational structure between biological and artificial systems.
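The abstract does not specify how modularization is quantified. As one concrete illustration (an assumption, not the paper's actual method), a common choice is Newman's graph modularity Q computed over a unit-by-unit representational similarity matrix: representations that form tight, well-separated clusters score high, and that score could then be correlated with per-model brain-alignment values. The `modularity_score` helper and the toy similarity matrix below are hypothetical.

```python
import numpy as np

def modularity_score(sim, labels):
    """Newman modularity Q of a nonnegative, symmetric similarity
    matrix (zero diagonal) under a given cluster assignment.

    Q = (1 / 2m) * sum over same-cluster pairs of (A_ij - k_i k_j / 2m),
    where m is total edge weight and k_i are weighted degrees.
    """
    sim = np.asarray(sim, dtype=float)
    m = sim.sum() / 2.0                    # total edge weight
    k = sim.sum(axis=1)                    # weighted node degrees
    expected = np.outer(k, k) / (2.0 * m)  # null-model expectation
    same = np.equal.outer(labels, labels)  # same-cluster mask
    return (sim - expected)[same].sum() / (2.0 * m)

# Toy 6-unit similarity matrix with two clear blocks, standing in for
# a representational similarity matrix of model units.
sim = np.zeros((6, 6))
sim[:3, :3] = 0.9
sim[3:, 3:] = 0.9
sim[:3, 3:] = sim[3:, :3] = 0.1
np.fill_diagonal(sim, 0.0)

block_labels = np.array([0, 0, 0, 1, 1, 1])
q_blocked = modularity_score(sim, block_labels)

# A partition that ignores the block structure scores much lower.
mixed_labels = np.array([0, 1, 0, 1, 0, 1])
q_mixed = modularity_score(sim, mixed_labels)
```

Under this sketch, "training makes representations more modular" would show up as Q rising over checkpoints, and the paper's claim amounts to that trajectory tracking fMRI/MEG alignment scores.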
Training-Driven Representational Geometry Modularization Predicts Brain Alignment in Language Models