A Personalized Fingerprint for Attention

In a world where our attention is pulled every which way by strategic advertisements and technicolor screens, it may be our most valuable commodity. Our brains have limited cognitive capacity, and it would be impossible to attend to every sensory stimulus we encounter in daily life. Instead, our brains have developed an elaborate system for selectively processing the information most relevant to us at any given moment.

But how do we even measure attention? To date, attentional functioning has been assessed primarily through a flurry of surveys and hours-long questionnaires, a costly and wearying process. Because attention fluctuates and is multi-faceted, an individual’s attentional functioning cannot easily be boiled down to a single number, which is why researchers have long sought a quantifiable, standardized measure of it.

The Visual Cognitive Neuroscience Lab, led by Dean Marvin Chun, set out to answer precisely this question. Using a creative combination of functional magnetic resonance imaging (fMRI) and connectome modeling, Kwangsun Yoo, an associate research scientist at Yale, Monica Rosenberg, an assistant professor of psychology at the University of Chicago, and the rest of the team recently reported their fMRI-based general measure of attention in Nature Human Behaviour.

To cover multiple aspects of human attention, the team collected fMRI data from ninety participants. The researchers measured volunteers’ brain activity while they performed three separate attention-demanding tasks. “Since we aimed to predict general attentional function, we had to use multiple task-based fMRI data,” Yoo said.

Let’s back up a bit. Using data from an fMRI scan, researchers can generate an individual’s whole-brain pattern of functional connectivity, a map of how signals in distributed brain regions fluctuate together over time. This pattern is called their connectome, and it acts almost like a brain fingerprint: each person’s functional connectome is unique and related to aspects of their abilities and behavior. That ‘brain fingerprint’ can then be fed into a connectome-based predictive model (CPM) to predict the person’s attentional ability.

“It’s kind of like listening to an orchestra, and all these instruments are going on at the same time, and there’s a certain harmony and rhythm in the way the brain areas become active and inactive at any given time,” Chun said. “Our model does something as simple as capturing all of that, and converting it into a matrix of numbers, where each cell is a correlation between two brain areas.”
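
To make that matrix concrete, here is a minimal Python sketch (not the authors’ code) of how a functional connectome can be computed from region-averaged fMRI signals; the region count and data below are placeholders.

```python
import numpy as np

def functional_connectome(timeseries: np.ndarray) -> np.ndarray:
    """Build a functional connectome from region-averaged fMRI signals.

    timeseries: (n_regions, n_timepoints), one BOLD signal per brain
    region. Returns an (n_regions, n_regions) matrix where each cell
    is the Pearson correlation between two regions' signals -- the
    'matrix of numbers' Chun describes.
    """
    return np.corrcoef(timeseries)

# Toy example: 10 regions, 200 timepoints of synthetic data. Real
# connectomes use hundreds of regions defined by a brain atlas.
rng = np.random.default_rng(0)
connectome = functional_connectome(rng.standard_normal((10, 200)))
print(connectome.shape)  # (10, 10): symmetric, with ones on the diagonal
```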

The team trained nine CPMs on this fMRI correlation data, all of which demonstrated strong predictive power for participants’ attentional ability across all three tasks. They could even re-train the CPMs on the mean of normalized performance scores across the three tasks, a ‘common attention factor,’ and use the models to predict overall attentional ability rather than task-specific ability.
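
The full CPM protocol involves cross-validation and separate networks of positively and negatively predictive edges, but a stripped-down sketch conveys the core recipe: find connectome edges whose strength tracks behavior across people, summarize each person by those edges, and fit a simple linear model. All data and thresholds below are illustrative, not the paper’s.

```python
import numpy as np
from scipy import stats

def train_cpm(connectomes, behavior, p_thresh=0.01):
    """Stripped-down connectome-based predictive model (CPM).

    connectomes: (n_subjects, n_edges), each row a vectorized connectome.
    behavior: (n_subjects,), e.g., normalized attention-task scores.
    """
    # Correlate each edge's strength with behavior across subjects.
    rs, ps = zip(*(stats.pearsonr(connectomes[:, e], behavior)
                   for e in range(connectomes.shape[1])))
    rs, ps = np.array(rs), np.array(ps)
    mask = (rs > 0) & (ps < p_thresh)  # edges reliably tied to behavior
    # Summarize each subject by the summed strength of those edges,
    # then fit a line from that single score to behavior.
    strength = connectomes[:, mask].sum(axis=1)
    slope, intercept = np.polyfit(strength, behavior, 1)
    return mask, slope, intercept

def predict_cpm(model, connectome):
    mask, slope, intercept = model
    return slope * connectome[mask].sum() + intercept

# Synthetic demo: 90 'subjects', 500 edges, signal planted in 5 edges.
rng = np.random.default_rng(1)
X = rng.standard_normal((90, 500))
y = X[:, :5].mean(axis=1) + 0.1 * rng.standard_normal(90)
model = train_cpm(X[:-1], y[:-1])        # train on the first 89
print(predict_cpm(model, X[-1]), y[-1])  # predict the held-out subject
```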

Now comes a seriously creative part of this paper. Task-based connectomes, collected while participants actively engage in a task, have much more predictive power than rest-based connectomes. However, rest scans, collected while participants simply lie still in the MRI scanner, are by nature much easier to collect reliably across different research studies and locations. To bridge this gap, Yoo introduced connectome-to-connectome state transformation (C2C) modeling, a novel approach that seeks to capture the best of both worlds. C2C modeling can extrapolate from an individual’s rest connectome and accurately generate their attention-task connectome without requiring the participant to engage in a task at all! The model-generated attention-task connectome was even found to improve subsequent behavioral predictions. This approach increases the predictive value of rest scans and may free future researchers from the burden of collecting multiple task scans or ensuring every task is perfectly standardized across participants.
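
What does it mean to extrapolate one connectome from another? The published C2C model is more elaborate than anything shown here, but a minimal sketch, assuming a simple ridge-regression mapping over vectorized connectomes and purely synthetic data, illustrates the shape of the idea.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Synthetic stand-ins for vectorized connectomes from a training group.
rng = np.random.default_rng(2)
n_subjects, n_edges = 80, 500
rest = rng.standard_normal((n_subjects, n_edges))                     # rest scans
task = 0.7 * rest + 0.3 * rng.standard_normal((n_subjects, n_edges))  # task scans

# Learn a mapping from rest connectivity to task connectivity. A plain
# ridge regression is only a stand-in for C2C, but the goal is the same:
# synthesize a task connectome from a rest scan.
c2c = Ridge(alpha=1.0).fit(rest, task)

# A new participant contributes only a rest scan...
new_rest = rng.standard_normal((1, n_edges))
# ...and the model generates their predicted attention-task connectome,
# ready to be handed to a CPM for behavioral prediction.
predicted_task = c2c.predict(new_rest)
print(predicted_task.shape)  # (1, 500)
```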

Finally, the team combined all of the above (the common attention factor, C2C transformation modeling, and CPMs) into a general attention model that encapsulates an individual’s overall attentional functioning. Combining C2C and CPM allows researchers to estimate multiple cognitive measures from a single rest scan. Though the measure is far from complete, it has the potential to extend to other mental traits and conditions such as intelligence, memory, depression, or anxiety.
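
Chained end to end, the pipeline might look like the toy sketch below: learn the rest-to-task transformation, learn a connectome-to-behavior model, then score a new individual from a rest scan alone. Every model choice and number here is a stand-in, not the paper’s implementation.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(3)
n_subjects, n_edges = 90, 500
rest = rng.standard_normal((n_subjects, n_edges))
task = 0.7 * rest + 0.3 * rng.standard_normal((n_subjects, n_edges))
attention = task[:, :20].mean(axis=1)  # stand-in 'common attention factor'

# Stage 1 (C2C stand-in): map rest connectivity to task connectivity.
c2c = Ridge(alpha=1.0).fit(rest[:-1], task[:-1])
# Stage 2 (CPM stand-in): map task connectivity to the attention factor.
cpm = Ridge(alpha=1.0).fit(task[:-1], attention[:-1])

# A held-out individual needs only a rest scan: transform, then predict.
generated_task = c2c.predict(rest[-1:])
print(cpm.predict(generated_task)[0], attention[-1])
```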

“Attention is only a test vehicle,” Chun said, pitching an example of someone who goes to the doctor for a depressive episode: “Instead of having these long wait times and hours-long interviews, can we just put them in a brain scanner and pop out a computerized print-out of what kind of depression they have, what will the best treatment be? How powerful would that be? We’re far away from it, but that’s my dream.”