New study shows reading is a team lift: how well different brain areas work together predicts proficiency

Conceptual illustration of unlocking the brain featuring an unlocked padlock and a human head and brain in profile on a background of neurons.

Results could help devise new teaching methods that encourage this interaction within the brain’s reading network, says UB psychologist and study co-author

Release Date: October 2, 2018


Chris McNorgan

“Developmentally, children start to have more cross talk between their sound processing areas and visual processing areas. They’re mutually reinforcing each other. If they’re not getting this input then children are having difficulty reading.”
Chris McNorgan, assistant professor of psychology
University at Buffalo

BUFFALO, N.Y. – Here’s a sentence. Got it? You just involuntarily transformed symbols on a screen into sounds in your head. Or to put it another way, you read it. That seems simple enough, but moving from what letters look like to what they sound like is a complex multisensory task that requires cooperation among brain areas specialized for visual and auditory processing.

Researchers call this collection of specialized brain regions that map letters to sounds (or phonemes) the reading network. The extent to which these sensory-specific parts of the brain are able to connect as a network, not necessarily anatomically, but functionally, during a child’s development predicts their reading proficiency, according to a new neuroimaging study from the University at Buffalo.

This developmental shift integrates previously segregated parts of the brain, suggesting that gains in reading skill are tied to the nature and degree of change in the neural pathways within the reading network. The results could help educators devise teaching methods that encourage these areas to operate more interactively.

“As children learn how to read, the brain rewires itself so that it goes from having one area working on visual matters and another working on auditory matters to the two areas working together as a cohesive unit,” says Chris McNorgan, an assistant professor of psychology at UB and co-author of the research published in a special edition of Frontiers in Psychology focusing on audio-visual processing in reading.

There is no single reading area of the brain. Written language developed roughly 5,000 years ago, far too recently in evolutionary history for the brain to have evolved a region dedicated to reading.

“But we have inherited and repurposed specialized brain circuits from our ancient ancestors,” says McNorgan. “They had to recognize objects, so there’s inherently a part of our brain circuitry adapted for identifying the sorts of things necessary for discriminating between letters. The auditory part of the brain is good at recognizing speech sounds.”

Mastering both written and spoken forms of language requires one part of the brain to map to another, the nominally visual with the nominally auditory.

Participants in the study who demonstrated the best development as readers had the greatest change from previously isolated to later interactive areas of the brain.

McNorgan and his colleagues used functional MRI (fMRI), a technology that measures and maps brain activity, for their functional connectivity study.

Anatomical connectivity refers to the white matter tracts that physically connect parts of the brain; functional connectivity, which often tracks anatomical connectivity, instead refers to separate brain areas that become active at the same time in response to a specific task.
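To make the idea of functional connectivity concrete, here is a minimal sketch in Python. It is illustrative rather than drawn from the study: the time series are simulated and the region labels are hypothetical. It shows the common approach of correlating two regions' activity over time, so that areas whose signals rise and fall together count as functionally connected.

import numpy as np

# Minimal sketch: functional connectivity estimated as the correlation
# between two regional fMRI time series. All data here are simulated.
rng = np.random.default_rng(0)
n_timepoints = 200

# Hypothetical BOLD signals for two regions of a reading network
visual = rng.standard_normal(n_timepoints)                          # visual processing region
auditory = 0.6 * visual + 0.8 * rng.standard_normal(n_timepoints)   # partially coupled auditory region

# Pearson correlation between the time series: values near 1 mean the
# regions activate together, regardless of any direct anatomical link.
fc = np.corrcoef(visual, auditory)[0, 1]
print(f"functional connectivity (Pearson r): {fc:.2f}")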

The researchers worked with 19 English-speaking participants, tracking the group at two time points: ages 8-11 and 11-13.

They measured participants’ reading skill at both time points by assessing their ability to read a series of pseudo-words. A pseudo-word, such as “glarp,” is a pronounceable string of letters that isn’t a real word. Pseudo-word reading is a useful measure of reading skill because it forces participants to use the rules of the language to work out a pronunciation rather than rely on previous reading experience for identification.

After reading skill was assessed, participants performed a rhyming judgment task in the fMRI scanner, where they decided whether pairs of sequentially displayed words rhymed, which required them to continuously map written words to sounds.

Using the fMRI data, McNorgan, who is doctoral advisor to the study’s lead author, UB graduate student Gregory J. Smith, and their co-author James R. Booth, a professor at Vanderbilt University, determined which brain areas are connected during the reading task.

Using techniques borrowed from graph theory, the branch of mathematics used to measure how other kinds of real-world networks function, the researchers measured cross talk in the patterns of interaction among the brain regions that make up the reading network.
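The sketch below illustrates the kind of graph-theoretic summary this describes, using Python’s networkx library. The connectivity matrix, threshold, and network size are hypothetical stand-ins, not values from the study; the point is how standard graph measures quantify interaction within a network of regions.

import numpy as np
import networkx as nx

# Minimal sketch: treat brain regions as nodes and strong functional
# correlations as edges, then summarize "cross talk" with standard
# network measures. All values here are simulated.
rng = np.random.default_rng(1)
n_regions = 8

# Symmetric toy correlation matrix standing in for region-to-region
# functional connectivity within a reading network
corr = rng.uniform(-1, 1, (n_regions, n_regions))
corr = (corr + corr.T) / 2
np.fill_diagonal(corr, 0)

# Keep only strong connections (the 0.5 threshold is an arbitrary
# illustrative choice, not one taken from the study)
adjacency = np.abs(corr) > 0.5
graph = nx.from_numpy_array(adjacency.astype(int))

# Two common summaries of network integration: how tightly clustered the
# regions are, and how efficiently signals could travel between any pair
print("average clustering:", nx.average_clustering(graph))
print("global efficiency:", nx.global_efficiency(graph))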

“This is fascinating because it falls in nicely with previous research on what’s going on in a child’s mind as they learn to read,” says McNorgan. “Developmentally, children start to have more cross talk between their sound processing areas and visual processing areas. They’re mutually reinforcing each other. If they’re not getting this input then children are having difficulty reading.”

Media Contact Information

Bert Gambini
News Content Manager
Humanities, Economics, Social Sciences, Social Work, Libraries
Tel: 716-645-5334
gambini@buffalo.edu