Unlock Value Encoding Secrets to Improve Diagnostic Accuracy
Executive Brief
- The News: 3 types of value-coding neurons identified in primate putamen
- Clinical Win: Stimulus-reward contingencies reversed after 40 trials isolate neural responses encoding value from responses to the stimuli themselves
- Target Specialty: Neurologists studying primate brain function and behavior
Key Data at a Glance
Study Design: Single-unit recording
Task Types: Tactile and visual value reversal tasks
Number of Trials for Adaptation: 4 trials
Types of Value-Coding Neurons: 3 types (tactile-selective, visual-selective, bimodal)
Species: Monkeys
Brain Region: Primate putamen
Convergent processing of tactile and visual values in the primate putamen
To examine how the population of neurons in the primate putamen processes value information from tactile and visual inputs, we trained monkeys to perform both tactile and visual value reversal tasks, as previously described (Fig. 1A, B and Supplementary Fig. S1A)8. In the tactile and visual value reversal tasks (T-VRT and V-VRT), one braille pattern or fractal image was associated with a reward (good), and the other was not (bad). This stimulus-reward contingency was reversed after 40 trials, enabling the examination of neural responses encoding the value while excluding neural responses to the stimuli (Fig. 1C).
In both tasks, the same tactile or visual stimulus was presented twice after the presentation of a square cue indicating finger insertion, enabling the monkeys to experience the stimulus during the first cue presentation (stimulus presentation period) and to predict the reward outcome during the blank delay period before the second cue presentation (Fig. 1B). Consequently, the monkeys inserted their fingers more rapidly into the hole after the second cue presentation when the previously experienced good stimulus was presented compared to the bad stimulus (Fig. 1D). Comparing the two tasks, the monkeys inserted their fingers more slowly in response to bad stimuli in V-VRT than in T-VRT, while they responded more quickly to good stimuli in T-VRT than in V-VRT (Supplementary Fig. S1B). There were more give-up trials (i.e., trials in which the monkeys did not insert their fingers into the hole) when they encountered bad stimuli in V-VRT (Supplementary Fig. S1C). When assessing how quickly the monkeys adapted their behavior following a value reversal, their reaction times shifted rapidly within four trials for both good and bad stimuli in both tasks (Supplementary Fig. S1D, E). Overall, these reaction-time differences indicate that the monkeys acquired and retained the values associated with the tactile and visual stimuli until the second cue presentation.
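The adaptation analysis above can be sketched in a few lines. The reaction times below are simulated, not the study's data, and the three-standard-deviation criterion is an illustrative assumption, not the authors' method:

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated reaction times (ms) around a value reversal: the stimulus that
# was "bad" becomes "good", so responses speed up within a few trials.
pre_reversal = rng.normal(480, 15, size=20)    # slow responses (bad stimulus)
post_reversal = np.concatenate([
    rng.normal(440, 15, size=4),               # transitional trials
    rng.normal(380, 15, size=16),              # fast responses (good stimulus)
])

def trials_to_adapt(post, baseline, threshold=3.0):
    """Index (1-based) of the first post-reversal trial whose reaction time
    falls more than `threshold` standard deviations below the baseline mean."""
    z = (post - baseline.mean()) / baseline.std(ddof=1)
    below = np.flatnonzero(z < -threshold)
    return int(below[0]) + 1 if below.size else None

n_adapt = trials_to_adapt(post_reversal, pre_reversal)
```

With a clean speed-up like the one simulated here, the criterion fires within the first handful of post-reversal trials, mirroring the rapid shift reported in Supplementary Fig. S1D, E.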
Three types of value-coding neurons were identified in the primate putamen through single-unit recording, as previously reported: tactile-selective value neurons, which respond specifically to tactile-related value information; visual-selective value neurons, which encode value information related to visual stimuli; and bimodal value neurons, which encode value information from both tactile and visual modalities (Fig. 1E)8. Bimodal value neurons comprise 43% of all task-related responsive neurons (129/299) and 52% of all value-coding neurons (129/247).
Notably, we observed that bimodal value neurons in the putamen showed value discrimination responses in both T-VRT and V-VRT, but their response patterns differed even within the same task periods and often occurred in entirely different periods (Fig. 1F, G and Supplementary Fig. S2A, B). Figure 1F shows an example neuron that exhibited stronger responses to good stimuli than to bad stimuli during the stimulus presentation period in both T-VRT and V-VRT, but its response patterns differed between the two tasks. Moreover, value encoding of the putamen neurons was often distributed across different task periods rather than confined to a single period (Supplementary Table S1). For instance, the neuron in Fig. 1G encoded both tactile and visual values, but it represented each value during different periods: the tactile value was encoded during the blank delay period while the visual value was encoded during the stimulus presentation period.
These bimodal value neurons recorded across all putamen areas exhibited heterogeneity during the processing of modality information: at the single-neuron level, 55% of bimodal value neurons encoded modality information, while 45% did not (Supplementary Fig. S2C). Bimodal value neurons did not show significant differences in the timing of value discrimination, but they exhibited a greater magnitude of value discrimination responses in V-VRT compared to T-VRT (Supplementary Fig. S2D, E).
For a more thorough exploration of how bimodal value neurons encode modality and value, we calculated the pairwise correlation of regression coefficients for each unit (Fig. 1H). This analysis revealed no statistically significant correlation between modality and value responses (p = 0.15, Pearson’s correlation), suggesting that the two representations are largely independent and thus highly separable.
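A minimal sketch of this correlation step, assuming hypothetical per-neuron regression coefficients (in the study these come from fitting each unit's firing rate; here they are simulated as independent draws to mirror the reported null result):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-unit regression coefficients for modality and value,
# simulated as independent draws (stand-ins for the fitted coefficients).
beta_modality = rng.normal(size=120)
beta_value = rng.normal(size=120)

def pearson_r(x, y):
    """Pearson correlation coefficient between two coefficient vectors."""
    x, y = x - x.mean(), y - y.mean()
    return float(x @ y / np.sqrt((x @ x) * (y @ y)))

r = pearson_r(beta_modality, beta_value)  # near 0 for independent draws
```

A correlation near zero across units is what motivates the "separable representations" reading; a strong positive or negative r would instead suggest that modality and value tuning covary.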
Given that the putamen is known to play a role in motor movements25,32,33, we further examined whether eye, finger, and arm movements could be classified based on the reward value during T-VRT and V-VRT. However, our analysis revealed that the monkeys did not adjust their eye, finger, or arm movements according to either reward value or modality, indicating that bimodal value neurons encode value and modality independently of these motor-related factors (Supplementary Fig. S3). Taken together, although nearly half of value-coding neurons encode both tactile and visual value information, the manner in which that information is encoded is heterogeneous, with response profiles distributed across different task periods.
The population patterns of bimodal value neurons represent tactile and visual values
Previous studies have suggested that complex neural activities may serve latent functions related to the processing of various types of information, which can be elucidated through population-level analyses34,35,36,37. The diversity in value encoding at the single-neuron level raises a question about the corresponding population-level responses (Fig. 2A): Do the population patterns of these neurons encode value information from tactile and visual inputs into a unified value, or do they separately represent the tactile and visual values? To determine whether bimodal value neurons process each value separately or in a unified manner, we conducted population decoding analyses of bimodal value neurons using three variables: modality, value, and their interaction (Fig. 2B and Supplementary Fig. S4A).
The decoding accuracy for value was at the chance level (50%) before the onset of the stimulus, but it quickly started to increase as the stimulus was presented. In contrast, modality was decoded with nearly 100% accuracy prior to stimulus onset, suggesting that bimodal value neurons process modality information similarly to how contextual information is processed in each task. The decoding accuracy for the interaction between value and modality (Tactile-good value/Tactile-bad value/Visual-good value/Visual-bad value) also increased and reached nearly 100% after stimulus presentation, but the accuracy was already around 50% even before the stimulus was presented (chance level = 25%). Considering that the bimodal value neurons represented the modality from the beginning of the trials, this statistically significant decoding for interaction prior to stimulus onset may be due to the use of modality information. To test this possibility, we constructed a confusion matrix of the decoder testing interaction between value and modality. These results indicate that the decoder differentiated between tactile and visual modalities prior to stimulus onset, and began discriminating values specific to each modality condition after stimulus presentation (Fig. 2C). This suggests that bimodal value neurons process modality as contextual information and selectively discriminate values within each modality condition.
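The logic of the decoding and confusion-matrix analysis can be illustrated with a simple nearest-centroid decoder on simulated pseudo-population data. This is only a sketch; the study's actual decoder, trial structure, and data differ:

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_trials, n_test = 50, 40, 10   # hypothetical pseudo-population

# Simulated firing-rate patterns: condition means separated along a
# "modality" axis and a "value" axis, plus trial-to-trial noise.
modality_axis = rng.normal(size=n_neurons)
value_axis = rng.normal(size=n_neurons)
means = {
    "tactile-good": modality_axis + value_axis,
    "tactile-bad":  modality_axis - value_axis,
    "visual-good": -modality_axis + value_axis,
    "visual-bad":  -modality_axis - value_axis,
}
data = {c: m + rng.normal(size=(n_trials, n_neurons)) for c, m in means.items()}

def decode_confusion(data, n_test):
    """Nearest-centroid decoder: centroids fit on early trials,
    held-out trials assigned to the nearest centroid."""
    labels = list(data)
    centroids = {c: x[:-n_test].mean(axis=0) for c, x in data.items()}
    conf = np.zeros((len(labels), len(labels)))
    for i, c in enumerate(labels):
        for trial in data[c][-n_test:]:
            d = [np.linalg.norm(trial - centroids[k]) for k in labels]
            conf[i, int(np.argmin(d))] += 1
    return conf / n_test   # rows: true condition, columns: decoded condition

conf = decode_confusion(data, n_test)
accuracy = float(np.trace(conf)) / len(means)
```

Before stimulus onset the real data contain only the modality signal, so such a decoder would confuse good and bad within each modality (a 2x2 block structure in the confusion matrix); after stimulus onset, value becomes decodable within each modality, as in Fig. 2C.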
These neural representations of modality and value dynamically changed over time (Fig. 2D). We compared the dynamics of the population activity among four conditions representing the combinations of modality and value in the subspace projected to the first three principal components (PCs) (Fig. 2D and Supplementary Fig. S4B, C). Consistent with the decoding results, the trajectories revealed that the patterns of dynamics in the latent space initially clustered by modality and then diverged according to their value and modality after stimulus presentation. Notably, the trajectories for good values moved nearly in parallel along the same direction, while those for bad values moved in parallel but in the opposite direction, suggesting an abstract representation of value (Fig. 2D). Overall, our data indicate that bimodal value neurons represented value differently depending on its modality, suggesting that the neural population in the putamen maintained the unique feature of modality when processing value components, supporting the divergent model in Fig. 2A.
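The projection into the first three PCs can be sketched via SVD; the activity matrix below is random stand-in data with assumed dimensions, not the recorded firing rates:

```python
import numpy as np

rng = np.random.default_rng(2)
n_cond, n_time, n_neurons = 4, 40, 60   # hypothetical sizes

# Stand-in for condition-averaged population activity over time
# (conditions x time bins, neurons).
X = rng.normal(size=(n_cond * n_time, n_neurons))

def pca_project(X, n_components=3):
    """Project activity onto the first principal components via SVD."""
    Xc = X - X.mean(axis=0)            # center each neuron's activity
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T    # low-dimensional coordinates

traj = pca_project(X).reshape(n_cond, n_time, 3)  # one 3-D trajectory per condition
```

Each row of `traj` is one condition's trajectory through the latent space; plotting the four trajectories is what reveals the initial clustering by modality and later divergence by value.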
Dynamic changes of the neural geometry during the processing of value and modality
This distinct process of tactile and visual values at the population level of bimodal value neurons raises a question: Do bimodal value neurons encode value and modality in a structured representation, and can it be generalized through shared features? If multiple variables can be generalized with a shared feature, the corresponding neural geometry forms an ‘abstract representation’, allowing for a reduction of the neural dimensions when processing multiple inputs10,11,38,39.
To investigate this, we measured how well each variable is generalized using the cross-condition generalization performance (CCGP), following previously established methods10. In the CCGP analysis, we determined whether a decoder, trained to identify the value (or modality) in one of two conditions (e.g., the good or bad in tactile condition), could decode the same value (or modality) in a condition not used for the training of the decoder (e.g., good or bad in the visual condition) (Fig. 3A).
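The CCGP procedure can be sketched with a mean-difference linear decoder on simulated data; the geometry (a shared value axis plus a modality offset) is an assumption built into the simulation, and the study's decoder may differ:

```python
import numpy as np

rng = np.random.default_rng(3)
n_neurons, n_trials = 50, 30   # hypothetical population

# Simulated responses with a shared "value" axis plus a modality offset,
# so the value decoder should generalize across modalities.
value_axis = rng.normal(size=n_neurons)
modality_offset = rng.normal(size=n_neurons)

def trials(value_sign, modality_sign):
    mean = value_sign * value_axis + modality_sign * modality_offset
    return mean + rng.normal(size=(n_trials, n_neurons))

tac_good, tac_bad = trials(+1, +1), trials(-1, +1)
vis_good, vis_bad = trials(+1, -1), trials(-1, -1)

def ccgp(train_pos, train_neg, test_pos, test_neg):
    """Train a mean-difference linear decoder in one condition and test it
    on trials from the condition held out of training."""
    w = train_pos.mean(0) - train_neg.mean(0)             # coding direction
    b = (train_pos.mean(0) + train_neg.mean(0)) @ w / 2   # midpoint threshold
    hits = np.sum(test_pos @ w > b) + np.sum(test_neg @ w <= b)
    return hits / (2 * len(test_pos))

# Value decoder trained on tactile trials, tested on visual trials.
score = ccgp(tac_good, tac_bad, vis_good, vis_bad)
```

If value were encoded along unrelated directions in the tactile and visual conditions, the same decoder would drop to chance (0.5) on the held-out modality; generalization above chance is what defines an abstract representation here.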
Figure 3B illustrates the dynamic change of the CCGPs for value and modality as the trial progresses, aligned with the stimulus presentation (see also Supplementary Fig. S5A, B). The CCGP for modality was initially above the chance level and sustained this level until ~150 ms after stimulus presentation. In contrast, the CCGP for value increased after stimulus onset, surpassing the chance level 230 ms after the stimulus appeared. The CCGP for value continued to rise after stimulus onset and during the delay period, while the CCGP for modality continued to decline during stimulus onset and at the beginning of the delay period.
Overall, the CCGPs for value and modality were initially distinct and then became similar, eventually reversing as the trial progressed. To verify and visualize this observation, we divided the time window of the trial into three distinct phases: Phase 1, which encompasses the prestimulus-early stimulus periods, from −200 to 150 ms aligned with the stimulus onset; Phase 2, focusing on stimulus presentation, from 150 to 500 ms; and Phase 3, covering the blank delay period, from 500 to 1000 ms (for further details and the rationale behind this segmentation, see the Methods section). We also analyzed these three phases by determining which variables (value and modality) were generalized across shared features in each phase (Fig. 3C and Supplementary Fig. S5C). In phase 1, the CCGP for modality was above the chance level, whereas for value it was not (Fig. 3C, left panel). In both phases 2 and 3, the decoding accuracy for the interaction between value and modality exceeded 98%, as analyzed by a traditional linear decoder (Fig. 3C, middle and right panels). However, the extent of the generalized representation dynamically changed as the phase progressed. In phase 2, the CCGPs for both value and modality were above the chance level (Fig. 3C, middle panel). However, in phase 3, the extent of the generalized representation for value increased further, but the extent of the generalized representation for modality decreased to the chance level, as analyzed according to the CCGP (Fig. 3C, right panel).
To further clarify the representational geometry of bimodal value neurons, we also computed the parallelism score (PS) in each phase, as previously reported, to quantify the degree to which the coding directions are parallel10. If the coding vectors for each variable are nearly parallel, the PS will deviate significantly from 0; conversely, if the neural representations resemble random representations, the PS will be approximately 0, indicating orthogonality between the coding vectors. In this study, we obtained one coding vector from a classifier trained to separate tactile-good from tactile-bad values and another from a classifier trained to separate visual-good from visual-bad values. The cosine of the angle between these two coding vectors defines the PS for ‘value’; a value close to 1 indicates that the two vectors are parallel.
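The PS itself reduces to a cosine similarity between two coding vectors. A minimal sketch, with made-up low-dimensional vectors standing in for the classifier weights:

```python
import numpy as np

def parallelism_score(u, v):
    """Cosine of the angle between two coding vectors: ~1 means parallel
    coding directions, ~0 means orthogonal (random-like) directions."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical coding vectors: mean "good" minus mean "bad" response,
# computed separately within the tactile and visual conditions.
tactile_value_vec = np.array([1.0, 2.0, -0.5, 0.0])
visual_value_vec = np.array([0.9, 2.1, -0.4, 0.1])   # nearly parallel

ps = parallelism_score(tactile_value_vec, visual_value_vec)
```

A PS near 1, as in this toy pair, is the signature of parallel value coding across modalities; shuffled or random coding directions in high dimensions would instead yield a PS near 0.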
As shown in Fig. 3C, the PSs for value and modality in all phases were above the chance level, except for the PS for value in phase 1. This suggests that the coding directions for both modality and value are almost parallel to the coding direction for that same variable across conditions after stimulus onset, potentially leading to high decoding accuracy rates and CCGPs in phases 2 and 3. Taken together, our quantification analyses using the CCGP revealed that as the trial progressed toward value-guided movements, the generalized representation for modality decreased, whereas the generalized representation for value increased.
To visualize the geometric architecture of these neural representations, the average population activities for all four possible pairings of value and modality in each phase were projected into a three-dimensional principal component (PC) space (Fig. 3D and Supplementary Movie 1). In phase 1, the population activities clustered according to the modality, showing similar representation outcomes across the same modality. Interestingly, as they progressed to phase 2, each variable diverged, forming a square-shaped geometry that captures low-dimensional representations for both value and modality. Subsequently, this geometric arrangement stretched along the value axis during the delay period of phase 3, resulting in a rectangular-shaped geometry that reflects stronger similarity in representations for the same values compared to the same modalities (Supplementary Fig. S5D). Our comprehensive analytical approaches, including CCGP and population responses in latent dimensions, successfully demonstrated that bimodal value neurons form low-dimensional representations. Furthermore, this neural geometry exhibited dynamic changes as the trial progressed, primarily stretching out along the value axis (Fig. 3D and Supplementary Fig. S5D).
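The square-to-rectangle claim can be made concrete with pairwise distances between condition centroids. The coordinates below are invented to mimic a phase-3, value-stretched geometry, not the study's PC values:

```python
import numpy as np

# Hypothetical phase-3 condition centroids in PC space, arranged as a
# rectangle stretched along the value axis (axis 0).
centroids = {
    ("tactile", "good"): np.array([3.0, 1.0, 0.0]),
    ("tactile", "bad"):  np.array([-3.0, 1.0, 0.0]),
    ("visual", "good"):  np.array([3.0, -1.0, 0.0]),
    ("visual", "bad"):   np.array([-3.0, -1.0, 0.0]),
}

def mean_distance(pairs):
    return float(np.mean([np.linalg.norm(centroids[a] - centroids[b])
                          for a, b in pairs]))

# Same value across modalities vs. same modality across values.
d_same_value = mean_distance([(("tactile", "good"), ("visual", "good")),
                              (("tactile", "bad"), ("visual", "bad"))])
d_same_modality = mean_distance([(("tactile", "good"), ("tactile", "bad")),
                                 (("visual", "good"), ("visual", "bad"))])
```

In this toy rectangle, same-value conditions sit closer together than same-modality conditions, which is the distance relation the phase-3 geometry in Supplementary Fig. S5D expresses.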
Clinical Perspective — Dr. Mohit Joshi, Psychiatry
Workflow: As I assess patients' behavior, I'm now considering the role of the primate putamen in processing value information from tactile and visual inputs, which could change how I approach patient evaluations. The study's use of tactile and visual value reversal tasks (T-VRT and V-VRT) highlights the complexity of value encoding, and I'd take this into account when designing treatment plans. The finding that monkeys adapted their behavior within four trials after a value reversal suggests that patients may also rapidly adjust to new value associations.
Economics: The article doesn't address cost directly, but the use of single-unit recording to identify value-coding neurons in the primate putamen could have implications for the development of new treatments or therapies. If these findings are translated to human patients, it could potentially lead to more targeted and efficient interventions, which could have economic benefits in the long run. However, more research is needed to determine the potential cost impact of these findings.
Patient Outcomes: The study's findings on the types of value-coding neurons in the primate putamen, including tactile-selective, visual-selective, and bimodal value neurons, could have significant implications for patient outcomes. For example, understanding how patients process value information from different modalities could help me develop more effective treatment plans, particularly for patients with conditions that affect the putamen. The fact that monkeys responded more quickly to good stimuli in T-VRT than V-VRT suggests that patients may also exhibit modality-specific responses to different types of stimuli.
Transparency & Corrections
HCP Connect is funded by Stravent LLC and maintains editorial independence from advertisers and pharmaceutical companies. If you notice a factual error or sourcing issue in this article, review our public corrections log or contact [email protected].