Why are the ERP amplitudes recorded and analyzed from different software so different in voltage?

Here are two ERPs from the same study design, though from different subjects. The largest amplitude in the first figure reaches 25 µV, while the second only reaches 6 µV.

Differences: the first was recorded with a 60-channel Easycap, analyzed in Brain Vision, and nose-referenced; the second was recorded with an EGI 128-channel Geodesic Net, analyzed in Net Station, and average-referenced. Are these differences sufficient to explain the large difference in voltage? Thank you!

The short answer is yes, probably.

Referencing is the process of subtracting a reference signal from all of your channels. While recording, the EEG data are typically referenced online to a vertex electrode, the earlobes, the back of the neck, etc. During post-collection processing, EEG data are most often re-referenced to the average reference or to linked mastoids. The selected reference changes the shape of the signal simply because the reference channel (or the average signal, if more than one channel is used) is subtracted from every channel. This means that results from data with different references cannot be compared directly. Since referencing is linear, it can typically be undone and the reference changed. I never use a nose reference, so I can't say for certain why the amplitude is so large, but I would guess that if you looked at the signal at that channel you would see how it adds to your electrode of interest (if the channel of interest is positive and the reference is negative, the amplitude goes up) or subtracts from it (if both are positive). For a P3 response, 6 microvolts is reasonable. If you want a more detailed explanation of referencing, I recommend Nunez's book, or another book on EEG data analysis and theory.
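Because re-referencing is just a linear subtraction, it is easy to see numerically why the choice of reference changes every channel's amplitude. Here is a minimal numpy sketch (the function name and toy data are illustrative, not from any particular EEG toolbox):

```python
import numpy as np

def rereference(eeg, ref_channels):
    """Re-reference EEG by subtracting the mean of the chosen
    reference channel(s) from every channel.

    eeg          : array of shape (n_channels, n_samples)
    ref_channels : indices of the new reference channel(s);
                   all channels -> average reference
    """
    ref_signal = eeg[ref_channels, :].mean(axis=0)  # reference time series
    return eeg - ref_signal                         # broadcast over channels

# Toy data: 4 channels, 5 samples
rng = np.random.default_rng(0)
eeg = rng.normal(size=(4, 5))

# Average reference: subtract the mean of all channels, so afterwards
# the mean across channels is ~0 at every sample
avg_ref = rereference(eeg, list(range(4)))
print(np.allclose(avg_ref.mean(axis=0), 0))  # True
```

Because the operation is linear, applying a new reference (say, channel 0, via `rereference(avg_ref, [0])`) to already average-referenced data gives the same result as applying it to the original recording, which is why a reference can typically be undone and changed after the fact.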


The aim of this research was to investigate the theory of neural desensitization as a direct consequence of playing VVGs; the research hypotheses were not supported. Reduced P300 amplitudes for violent images have been reported as a marker of aggression and neural desensitization,55 and this was the result of long- and short-term exposure to VVGs.8,14–19,64,65 In this study, there were no significant differences in the baseline measurements of the P300 amplitudes at any site (P3, Pz, and P4). This was the case for both the neutral and violent images, between the VVG players and non-VVG players. This indicated that, in this study, experience of playing VVGs had no impact on this measure, as there was no neural desensitization.55 In addition, post-VVG play, there was no significant difference in P300 amplitudes across sites or images between the VVG gamers and the nongamers. This indicated that there were no short-term neural effects of playing VVGs and, again, no neural desensitization.55

There could be a number of possible explanations for the results. Although this study utilized a different measuring technique, the same principles were applied, and methodological procedures appropriate for EEG research were followed.34,37,38,57,58,60–62 The reported VVG desensitization, expressed as a reduction of P300 amplitudes across sites after exposure to violent stimuli,17,55,56 may have been due to the inclusion of methodological elements not appropriate for the research aims, and thus an artifact of experimental design. In addition, images unrelated to the goals of the study were included;55 however, there was no information pertaining to how many of these images were included or at what stage they were presented to the participants. It is possible that this inclusion had an impact on the results. There is also the possibility that the media available in society has resulted in a general habituation to violent content.54 Finally, participants playing in a naturalistic setting experience elements of a video game unrelated to its violent content, such as competition and failure, and it is conceivable that these factors may be related to the expression of aggression.20,29,31,32

The functional significance of the P300 is yet to be determined,39 and it has been associated with a diverse range of functions.40,45–53 There is a strong possibility that P300 amplitudes are not a measure of desensitization to violence and a marker of aggression.55 There was no theoretical or empirical evidence in the literature reporting the P300 as a measure of the expression of aggression. It was proposed that the P300 represented the response of the locus coeruleus–norepinephrine system to the outcome of internal decision-making processes and the consequent effects of noradrenergic potentiation of information processing.40 This was interpreted as the P300 being a measure of aversive behavior, with lower values representing higher aggression.55 The results of this study do not support this theory. The violent images elicited significantly higher P300 amplitudes than the neutral images at P3 and P4, and this was consistent with previous research.60,70–72 A possible explanation is that this study and the neural desensitization study55 are congruent with affective imagery research,70–73 and that the neural response to affective imagery was measured, not aggression. In this study, there was no significant difference in P300 amplitudes at the Pz electrode site between neutral and violent images. However, this was consistent with the variation reported within the field for the comparison of neutral and violent images at the Pz electrode site: significant differences have been reported at this site,74 nonsignificant differences have been reported,75 and it has also been suggested that the Pz electrode site is not of importance for this comparison.70,76,77

In this study, a significant main effect of time at the P4 electrode site was recorded, with the P300 amplitudes being larger for both violent and neutral images after playing the VVG. It has been demonstrated that the right hemisphere of the brain, where the P4 electrode is located, is superior in processing images.78 In addition, enhanced auditory,79 visual,79 and cognitive80 processing abilities have been recorded in individuals after playing video games. One possible explanation for the results of this study is that playing the VVG stimulated a region in the participants' brains associated with processing imagery, and the larger P300s were a neural representation of this enhancement.

The results of this study did not support the theory of neural desensitization as a direct consequence of playing VVGs;55 these results were congruent with those of researchers63 using fMRI techniques. It was proposed that no support was found because confounding methodological elements observed in research supporting the theory of desensitization64,65 were not included in the experimental design. The limitations of the GAM have been extensively discussed,36 and the results of this research indicate that more research needs to be conducted to evaluate the neurological aspect of the GAM. Assigning participants to conditions based upon VVG experience appeared to be a nondiscriminatory variable. It has been suggested that this separation is inappropriate for research purposes,81 and further support was provided by the data from this study. The mean P300 amplitudes, standard deviations, and results of the analysis suggested that the data were drawn from one sample, not two discrete groups.82

This was the first study to evaluate the reported phenomenon of neural desensitization as a result of VVG playing by examining the P300 component of ERPs. The results did not provide evidence to support this theory, and there were no neural deficits associated with long- and short-term exposure to VVGs. During the data collection phase of this research, it was observed that it was becoming increasingly difficult to recruit participants who had not played VVGs, as demonstrated by the integration into society of titles such as Fortnite.2 The number of unit sales of such titles shows that VVGs are a recreational activity for a large proportion of the general population, spanning cultures, generations, and genders. The simplest explanation for the current findings could be that there is no relationship between VVGs, desensitization, and the expression of aggression.35 However, as other researchers83 have stated, the players, and not the games, may be a more appropriate paradigm for future research.


The aim was to investigate behavioral reactions and event-related potential (ERP) responses in healthy participants under conditions of personalized attribution of emotional appraisal vocabulary to oneself or to other people. One hundred and fifty emotionally neutral, positive, and negative words describing people's traits were used. Subjects were asked to attribute each word to four types of people: oneself, a loved person, an unpleasant person, and a neutral person. Reaction times when attributing adjectives to oneself or a loved person were shorter than when attributing them to neutral or unpleasant people. Self-related adjectives induced higher amplitudes of the N400 ERP peak in the medial cortical areas than adjectives related to other people. The amplitudes of P300 and P600 depended on the emotional valence of the assessments, but not on the personalized attribution. An interaction between the attribution effect and the effect of emotional valence was observed for the N400 peak in the left temporal area, with the maximal N400 amplitude observed when self-attributing emotionally positive adjectives. Our results support the hypothesis that the emotional valence of assessments and the processing of information about oneself or others are related to brain processes that differ from each other in cortical localization or temporal dynamics.


The current work investigates the neural basis of individual face processing and its temporal dynamics through the application of pattern analysis and image reconstruction to EEG data. This investigation yields several notable outcomes as follows.

First, we find that EEG data support facial identity discrimination. By and large, this finding confirms the possibility of EEG-based pattern classification of facial identity across changes in expression (Nemrodov et al., 2016). Discrimination peaks were identified in the proximity of the N170 and N250 ERP components, consistent with univariate analyses pointing to the relevance of the former (Itier and Taylor, 2002; Heisz et al., 2006; Jacques et al., 2007; Caharel et al., 2009) and the latter (Schweinberger et al., 2002; Huddy et al., 2003; Tanaka et al., 2006) for face processing. The onset of discrimination, around 150 ms, was intermediate between early estimates in the vicinity of P1 (Nemrodov et al., 2016) and later estimates around 200 ms as reported with MEG (Vida et al., 2017). One possibility is that early estimates, along with higher levels of discrimination, can be triggered by the use of low-level image properties (Cauchoix et al., 2014; Ghuman et al., 2014). In line with this consideration, we found that within-expression discrimination produced earlier and consistently higher levels of discrimination accuracy than across-expression discrimination. Importantly though, across-expression classification, which aims to minimize reliance on low-level cues, exhibited robust levels of discrimination across an extensive interval (i.e., from ∼150 ms onwards), while its time course was also mirrored by that of neural-behavioral correlations in the context of pairwise face similarity.

Second, temporally cumulative analyses targeted identity discrimination across a broad interval between 50 and 650 ms after stimulus onset. Despite the increase in dimensionality of the classification patterns, these data supported even more robust levels of accuracy for both within- and across-expression discrimination, consistent with the presence of relevant information at multiple time points. Moreover, the superior levels of discrimination obtained with temporally cumulative data, as opposed to 10-ms windows, agree with the presence of distinct sources of information at different time points. That is, we relate the boost in classification accuracy to the ability to exploit complementary information about facial identity at different times. Interestingly, this conclusion echoes that based on the lack of temporal generalization found with cross-temporal object decoding of MEG data (Carlson et al., 2013; Isik et al., 2014); specifically, the lack of classification success when using training and testing data from distinct time intervals has been taken as evidence for the presence of different types of information over time (Grootswagers et al., 2017). Further, the boost in classification noted above is important for practical purposes: it suggests that investigations that place less emphasis on clarifying the time course of discrimination can be better served by exploiting patterns across larger temporal intervals. Accordingly, our subsequent investigations into face space structure and image reconstruction were conducted with both window-based and cumulative data.
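The difference between window-based and temporally cumulative patterns comes down to how epoched data are flattened into feature vectors before classification; a small numpy sketch with hypothetical dimensions:

```python
import numpy as np

# Hypothetical epoched EEG: (n_trials, n_channels, n_samples)
n_trials, n_channels, n_samples = 20, 8, 60
rng = np.random.default_rng(1)
epochs = rng.normal(size=(n_trials, n_channels, n_samples))

# Window-based pattern: channels x samples within one short window
# (here samples 10..12), one feature vector per trial
window = epochs[:, :, 10:13].reshape(n_trials, -1)   # shape (20, 24)

# Temporally cumulative pattern: every channel at every time point
cumulative = epochs.reshape(n_trials, -1)            # shape (20, 480)

print(window.shape, cumulative.shape)
```

Either matrix can then be fed to a linear classifier; the cumulative version trades a large increase in dimensionality for access to complementary information spread across the whole epoch.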

Third, a neural-based estimate of face space was constructed from EEG data and its organization was explained by the presence of visual information captured by CIMs. This result is significant in that it confirms that pattern discrimination relies, at least partly, on relevant visual information (e.g., as opposed to higher-level semantic cues). More importantly, we note that neural-based face space has been examined in the context of fMRI (Loffler et al., 2005; Rotshtein et al., 2005; Gao and Wilson, 2013) and monkey neurophysiology (Leopold et al., 2006; Freiwald et al., 2009). Yet, many of its properties, as related to facial identity representation, remain to be clarified. For instance, its invariant structure across different types of image transformation remains to be assessed and quantified. Behavioral research suggests that face space topography is largely invariant across viewpoint and lighting (Blank and Yovel, 2011). Here, we reach a similar conclusion regarding the expression invariance of neural-based face space as derived from EEG data.

Fourth, image reconstruction was conducted with the aid of CIM features derived directly from the structure of EEG data (i.e., as opposed to predefined visual features selected due to their general biological plausibility). This endeavor builds on pattern classification while, critically, it validates its results by showcasing its reliance on relevant visual information encoded in the EEG signal. We found that multiple temporal intervals supported better-than-chance reconstruction for both neutral and happy faces, with a peak in the proximity of the N170 component. Also, reconstruction accuracy was further boosted by considering temporally cumulative information, as used for pattern classification. More importantly, these results are notable in that, unlike previous work with fMRI-based facial image reconstruction (Cowen et al., 2014; Nestor et al., 2016), they exploit invariant face space information for reconstruction purposes. Thus, arguably, the current findings speak to the visual nature of facial identity representations rather than just to lower-level pictorial aspects of face perception.

Further, the current work provides proof of principle for EEG-based image reconstruction. Importantly, not only does this demonstrate the applicability of image reconstruction to neuroimaging modalities other than fMRI but, critically, it shows that EEG-based reconstruction can compete in terms of overall accuracy with its fMRI counterpart (Nestor et al., 2016).

Thus, here we build on previous EEG investigations of face processing and on pattern analyses of neuroimaging data to address several theoretical and methodological issues. In particular, the current work capitalizes on previous attempts at clarifying the temporal profile of individual face processing via linear classification of spatiotemporal EEG patterns across facial expression (Nemrodov et al., 2016). In agreement with this previous work, we find that individual faces can be discriminated from their corresponding EEG patterns, that the time course exhibits an extended interval of significant discrimination, and that multiple discrimination peaks occur, including an early one in the vicinity of the N170 component. Unlike this previous work, though, which relied on a restricted set of eight male and female faces, we find that such discrimination can be performed even with a large, homogeneous set of face images controlled for low- and high-level face properties (e.g., through geometrical alignment and intensity normalization of 108 white male face images). Hence, differences in discrimination onset across studies (i.e., 70 ms in that previous work vs. 152 ms here) are likely related to reliance on idiosyncratic image differences within a small stimulus set. More importantly, not only does the current work examine the time course of individual face classification in a more reliable and thorough manner but, critically, it utilizes its outcomes for the purpose of facial feature derivation and image reconstruction.

Naturally, boosting classification and reconstruction accuracy even further is an important future endeavor. Regarding classification, this could be achieved, for instance, through efficient techniques for feature selection, such as recursive feature elimination (Hanson and Halchenko, 2008; Nestor et al., 2011), aimed at reducing pattern dimensionality and optimizing discrimination performance. Since electrodes are likely to carry irrelevant or redundant information at multiple time points, eliminating this information from higher-dimensional spatiotemporal patterns (e.g., across all electrodes and time points) could benefit classification. Regarding reconstruction, more complex, biologically plausible approaches can be developed, for instance, by considering shape and surface information separately within the reconstruction process. Since shape and surface provide complementary cues to face processing (Jiang et al., 2006; Andrews et al., 2016), it would be informative to derive separate types of CIMs corresponding to this distinction and to consider their separate contributions to facial image reconstruction.
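Recursive feature elimination is straightforward to sketch: repeatedly fit a model and discard the feature it weights least. The toy version below ranks features by the absolute weights of a least-squares fit (a stand-in; the cited work uses proper classifiers with cross-validation):

```python
import numpy as np

def toy_rfe(X, y, n_keep):
    """Toy recursive feature elimination: fit a least-squares model,
    drop the feature with the smallest absolute weight, repeat."""
    keep = list(range(X.shape[1]))
    while len(keep) > n_keep:
        w, *_ = np.linalg.lstsq(X[:, keep], y, rcond=None)
        keep.pop(int(np.argmin(np.abs(w))))
    return keep

# Toy data: y depends only on features 0 and 2; the rest are noise
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 6))
y = 3.0 * X[:, 0] - 2.0 * X[:, 2] + 0.1 * rng.normal(size=200)

print(sorted(toy_rfe(X, y, 2)))  # the informative features survive: [0, 2]
```

For spatiotemporal EEG patterns, each column of `X` would correspond to one electrode at one time point, so elimination prunes uninformative electrode/time combinations before classification.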

Notably, however, beyond overall performance, EEG-based reconstruction stands out by its ability to clarify the dynamics of visual representations as they develop in response to a given stimulus. For instance, it can speak to how a percept evolves over time in response to a static stimulus, as attempted here, by inspecting image reconstruction across consecutive time windows. Alternatively, this method could be extended to recover fine-grained dynamic information as present in moving stimuli. While reconstruction of natural movies has been previously conducted with fMRI (Nishimoto et al., 2011), the superior temporal resolution of EEG could make this modality a more efficient choice for the recovery of dynamic visual information. Further, we note that the comparatively wide availability of EEG systems could also render this modality the preferred choice for the development of new types of image-reconstruction brain-computer interfaces.

Last, while the current investigation focuses on faces as a visual category of interest, we argue that the present methodological approach can inform individual object recognition more generally. This is theoretically suggested by the presence of common neurocomputational principles underlying face and object identification (Cowell and Cottrell, 2013; Wang et al., 2016) as well as, methodologically, by the ability to evaluate the dynamics of invariant object recognition (Isik et al., 2014). Particularly encouraging in this sense is the success of efforts to construct and characterize object similarity spaces from MEG (Carlson et al., 2013) and EEG data (Kaneshiro et al., 2015).

To conclude, our investigation targets the neural dynamics of face processing as reflected by EEG patterns. Our findings shed new light on the time course of facial identity processing while providing a way to extract and to assess the underlying visual information. Last, from a methodological standpoint, our results establish the feasibility of EEG-based image reconstruction and, more generally, they confirm the rich informational content of spatiotemporal EEG patterns.


Experiment 1 required subjects to study mixed lists of words and pictures that they were asked to read/name aloud, followed by a recognition test with words that were either studied words (word/word), the names of studied pictures (picture/word), or not studied (new). Pilot testing ensured that accuracy would be higher in the picture/word than word/word conditions. We predicted that 300–500 msec mid-frontal FN400 ERP old/new effects should be larger for word/word than picture/word conditions, but 500–800 msec parietal old/new effects should be larger for picture/word than word/word conditions. This prediction is consistent with trends reported in previous ERP studies (Ally & Budson, 2007; Schloerscheidt & Rugg, 2004), other ERP experiments showing that the FN400 is enhanced by study/test congruity (Nyhus & Curran, 2009; Ecker et al., 2007a, 2007b; Groh-Bordin et al., 2006), Boldini et al.'s (2007) speed–accuracy tradeoff study showing an early word/word advantage followed by a later picture/word advantage, as well as remember/know studies suggesting a recollective basis for the picture superiority effect (Rajaram, 1996; Dewhurst & Conway, 1994).



Thirty-two right-handed participants were paid $15/hour or given credit for a University of Colorado course requirement. Of these 32, data from 5 participants were discarded because of excessive eye movement artifacts or bad electrodes. Of the 27 subjects retained for analysis, 15 were women.


The experiment included three conditions that were all manipulated within subjects and mixed within lists: studied words, studied pictures, and new (nonstudied) words.


Three hundred line drawings of common objects and their corresponding names comprised the experimental stimuli. For each subject, stimuli were randomly assigned to one of three conditions: studied pictures, studied words, or nonstudied. All pictures and words were obtained from the IPNP on-line database (∼aszekely/ipnp/; Szekely et al., 2004), which includes 174 of the stimuli from Snodgrass and Vanderwart (1980). All stimuli were chosen according to word length (<10 letters) and percent name agreement (>82%), which is the percent of trials on which Szekely et al.'s (2004) participants named a picture with the corresponding target word. An additional 26 practice stimuli (10 pictures, 16 words) and 20 buffer stimuli (10 pictures, 10 words) did not meet these criteria. The pictures were presented in black and white, and the words were presented in white on a black background. All stimuli were viewed on an LCD computer monitor.


Participants were given verbal and written instructions for a practice version of the experiment, which was identical to one experimental study/test block in every respect but length (10 study pictures, 10 study words, and 6 new test words during the practice). After completing the practice test, the participant was fitted with a Geodesic Sensor Net. Following net application, subjects took approximately 1.5 hours to complete five study/test blocks.

Each of five study lists included 20 pictures and 20 words, shown one at a time, alternating between picture and word format. There were two nontested buffers at the beginning and end of each study list to absorb primacy and recency effects. Each studied picture or word appeared in the center of the screen for 2000 msec with a 1000-msec ISI. Subjects were instructed to name each picture or read each word out loud, while the experimenter wrote down their verbal responses.

During each test list, subjects were presented with words belonging to three conditions: 20 words that were studied in word format (word/word), 20 words that were names of studied pictures (picture/word), and 20 new, nonstudied words. Before each word, there was a randomly timed fixation cross (between 500 and 1000 msec). Each word appeared for 2 sec, after which the participant was shown a question mark (“?”). Subjects were instructed to withhold their responses until the question mark appeared; otherwise, they were informed that they had responded too quickly. A 1000-msec ISI followed each response. Participants indicated whether the test word was old or new by pressing one of two keys on a vertically aligned response box with their right and left index fingers. Assignment of left/right fingers to old/new keys was counterbalanced across subjects. Subjects were instructed to limit blinking and movement. Subject-timed blink breaks were given after every 15 words.

EEG/ERP Methods

Scalp voltages were collected with a 128-channel HydroCel Geodesic Sensor Net connected to AC-coupled, 128-channel, high-input-impedance amplifiers (200 MΩ; Net Amps, Electrical Geodesics Inc., Eugene, OR). Amplified analog voltages (0.1–100 Hz bandpass) were digitized at 250 Hz. Individual sensors were adjusted until impedances were less than 50 kΩ.

The EEG was digitally low-pass filtered at 40 Hz prior to ERP analysis. Trials were discarded from analysis if they contained incorrect responses or more than 20% of the channels were bad (average amplitude over 100 μV or voltage fluctuations of greater than 50 μV between adjacent samples). Individual bad channels were replaced on a trial-by-trial basis with a spherical spline algorithm (Srinivasan, Nunez, Silberstein, Tucker, & Cadusch, 1996). Eye movements were corrected using an ocular artifact detection algorithm (Gratton, Coles, & Donchin, 1983). EEG was measured with respect to a vertex reference (Cz). ERPs were re-referenced to an average reference, the voltage difference between that channel and the average of all channels, to minimize the effects of reference site activity and to improve estimates of electrical field topography (Dien, 1998). The average reference was corrected for the polar average reference effect (Junghöfer, Elbert, Tucker, & Braun, 1999). ERPs were baseline corrected to a 200-msec prestimulus recording interval.
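The per-trial channel-rejection criterion described above can be sketched as follows (thresholds from the text; treating "average amplitude" as the mean absolute voltage is our assumption):

```python
import numpy as np

def bad_channels(trial, amp_thresh=100.0, step_thresh=50.0):
    """Flag channels in one trial (in microvolts) that exceed the
    rejection criteria: mean absolute amplitude above amp_thresh, or
    a fluctuation above step_thresh between adjacent samples."""
    too_large = np.abs(trial).mean(axis=1) > amp_thresh
    too_jumpy = np.abs(np.diff(trial, axis=1)).max(axis=1) > step_thresh
    return too_large | too_jumpy

# Toy trial: 4 channels x 100 samples of low-amplitude noise,
# with an abrupt 120 uV artifact injected into channel 2
rng = np.random.default_rng(3)
trial = rng.normal(scale=5.0, size=(4, 100))
trial[2, 50] += 120.0

flags = bad_channels(trial)
print(flags)  # only channel 2 is flagged
```

A whole trial would then be discarded when more than 20% of channels are flagged (e.g., `flags.mean() > 0.2`); otherwise only the flagged channels are replaced by spline interpolation.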


All p values from repeated measures ANOVAs were corrected for violations of the sphericity assumption using the method of Geisser and Greenhouse (1958).

Behavioral Results


Proportion correct was analyzed in a three-condition (new, picture, and word) repeated measures ANOVA. The main effect of condition was significant [F(2, 52) = 43.60, MSE = 0.01, p < .0001]. Subjects were significantly more accurate for pictures than words, demonstrating a picture superiority effect [F(1, 52) = 77.07, MSE = 0.01, p < .0001; see Figure 1, left].



Primus B: Cases and Thematic Roles. 1999, Tübingen: Niemeyer

Kuperberg GR, Sitnikova T, Caplan D, Holcomb PJ: Electrophysiological distinctions in processing conceptual relationships within simple sentences. Cognitive Brain Research. 2003, 17: 117-129. 10.1016/S0926-6410(03)00086-7.

Kuperberg GR, Caplan D, Sitnikova T, Eddy M, Holcomb PJ: Neural correlates of processing syntactic, semantic, and thematic relationships in sentences. Language and Cognitive Processes. 2006, 21 (5): 489-530. 10.1080/01690960500094279.

Kolk HHJ, Chwilla DJ, van Herten M, Oor PJW: Structure and limited capacity in verbal working memory: A study with event-related potentials. Brain and Language. 2003, 85: 1-36. 10.1016/S0093-934X(02)00548-5.

Weber-Fox CM, Neville HJ: Maturational constraints on functional specializations for language processing: ERP and behavioral evidence in bilingual speakers. Journal of Cognitive Neuroscience. 1996, 8 (3): 231-256.

Hahne A: What's different in second-language processing? Evidence from event-related brain potentials. Journal of Psycholinguistic Research. 2001, 30: 251-266. 10.1023/A:1010490917575.

Hahne A, Friederici AD: Processing a second language: Late learner's comprehension mechanisms as revealed by event-related potentials. Bilingualism. 2001, 4: 123-141. 10.1017/S1366728901000232.

Ojima S, Nakata H, Kakigi R: An ERP study of second language learning after childhood: Effects of proficiency. Journal of Cognitive Neuroscience. 2005, 17 (8): 1212-1228. 10.1162/0898929055002436.

McLaughlin J, Osterhout L, Kim A: Neural correlates of second-language word learning: minimal instruction produces rapid change. Nature Neuroscience. 2004, 7: 703-704. 10.1038/nn1264.

Ardal S, Donald MW, Meuter R, Muldrew S, Luce M: Brain responses to semantic incongruity in bilinguals. Brain and language. 1990, 39: 187-205. 10.1016/0093-934X(90)90011-5.

Moreno EM, Kutas M: Processing semantic anomalies in two languages: an electrophysiological exploration in both languages of Spanish-English bilinguals. Cognitive Brain Research. 2005, 22: 205-220. 10.1016/j.cogbrainres.2004.08.010.

Johnson JS, Newport EL: Critical period effects in second language learning: The influence of maturational state on the acquisition of English as a second language. Cognitive Psychology. 1989, 21: 60-99. 10.1016/0010-0285(89)90003-0.

De Keyser RM: The robustness of critical period effects in second language acquisition. Studies in Second Language Acquisition. 2000, 22: 493-533.

Weckerly J, Kutas M: An electrophysiological analysis of animacy effects in the processing of object relative sentences. Psychophysiology. 1999, 36: 559-570. 10.1017/S0048577299971202.

Roehm D, Schlesewsky M, Bornkessel I, Frisch S, Haider H: Fractionating language comprehension via frequency characteristics of the human EEG. NeuroReport. 2004, 15: 409-412. 10.1097/00001756-200403010-00005.

Uehara K: Judgments of processing load in Japanese: The effect of NP-ga sequences. Journal of Psycholinguistic Research. 1997, 26: 255-263. 10.1023/A:1025069801360.

Uehara K: External possession constructions in Japanese: A psycholinguistic perspective. External possession. Edited by: Payne D, Barshi I. 1999, Amsterdam, Holland: John Benjamins, 45-74.

Hahne A, Mueller JL, Clahsen H: Morphological processing in a second language: Behavioral and ERP evidence for storage and decomposition. Journal of Cognitive Neuroscience. 2006, 18: 121-134. 10.1162/089892906775250067.

Friederici AD, Steinhauer K, Pfeifer E: Brain signatures of artificial language processing: Evidence challenging the Critical period hypothesis. PNAS. 2002, 99: 529-534. 10.1073/pnas.012611199.

Hoen M, Dominey PF: ERP analysis of cognitie sequencing: A left anterior negativity related to structural transformation processing. Neuroreport. 2000, 11 (4): 3187-3191. 10.1097/00001756-200009280-00028.

DeVincenci M: Syntactic parsing strategies in Italian. 1991, Dordrecht: Kluwer

Nenonen S, Shestakova A, Houtilainen M, Näätänen R: Linguistic relevance of duration within native language determines the accuracy of speech-sound duration processing. Cognitive Brain Research. 2003, 16: 492-495. 10.1016/S0926-6410(03)00055-7.

Frenck-Mestre C, Meunier C, Espesser R, Daffner K, Holcomb P: Perceiving nonnative vowels: The effect of context on perception as evidenced by event-related brain potentials. Journal of Speech, Language, and Hearing Research. 2005, 48: 1-15. 10.1044/1092-4388(2005/104).

Sanders LD, Neville HJ: An ERP study of continuous speech processing II. Segmentation, semantics and syntax in nonnative speakers. Cognitive Brain Research. 2003, 15: 214-227. 10.1016/S0926-6410(02)00194-5.

Proverbio AM, Cok B, Zani A: Electrophysiological measures of language processing in bilinguals. Journal of Cognitive Neuroscience. 2002, 14 (7): 994-1017. 10.1162/089892902320474463.

Rösler F, Pütz P, Friederici AD, Hahne A: Event-related brain potentials while encountering semantic and syntactic constraint violations. Journal of Cognitive Neuroscience. 1993, 5 (3): 345-362.

Hahne A, Friederici AD: Electrophysiological evidence for two steps in syntactic analysis: Early automatic and late controlled processes. Journal of Cognitive Neuroscience. 1999, 11: 194-205. 10.1162/089892999563328.

McCarthy G, Wood C: Scalp distributions of event-related potentials: an ambiguity associated with analysis of variance models. Electroencephalography and Clinical Neurophysiology. 1985, 62: 203-208. 10.1016/0168-5597(85)90015-2.

Picton TW, Bentin S, Berg P, Donchin E, Hillyard SA, Johnson R, Miller GA, Ritter W, Ruchkin DS, Rugg MD, Taylor MJ: Guidelines for using human event-related potentials to study cognition: Recording standards and publication criteria. Psychophysiology. 2000, 37: 127-152. 10.1017/S0048577200000305.

Urbach T, Kutas M: The intractability of scaling scalp distributions to infer neuroelectric sources. Psychophysiology. 2002, 39: 791-808. 10.1111/1469-8986.3960791.

Urbach T, Kutas M: Interpreting event-related brain potential (ERP) distributions: implications of baseline potentials and variabilty with application to amplitude normalization by vector scaling. Biological Psychology. 2006, 72: 333-343. 10.1016/j.biopsycho.2005.11.012.

Wilding E: The practice of rescaling scalp-recorded event-related potentials. Biological Psychology. 2006, 72: 325-332. 10.1016/j.biopsycho.2005.12.002.


According to contemporary accounts of visual working memory (vWM), the ability to efficiently filter relevant from irrelevant information contributes to an individual’s overall vWM capacity. Although there is mounting evidence for this hypothesis, very little is known about the precise filtering mechanism responsible for controlling access to vWM and for differentiating low- and high-capacity individuals. Theoretically, the inefficient filtering observed in low-capacity individuals might be specifically linked to problems enhancing relevant items, suppressing irrelevant items, or both. To find out, we recorded neurophysiological activity associated with attentional selection and active suppression during a competitive visual search task. We show that high-capacity individuals actively suppress salient distractors, whereas low-capacity individuals are unable to suppress salient distractors in time to prevent those items from capturing attention. These results demonstrate that individual differences in vWM capacity are associated with the timing of a specific attentional control operation that suppresses processing of salient but irrelevant visual objects and restricts their access to higher stages of visual processing.

Each day, human observers perform numerous tasks that require temporary storage of information about objects in the surrounding visual environment. Laboratory studies have revealed substantial variability across neurologically healthy adults in the ability to keep such visuospatial information in mind (1–4). Originally, this variability was attributed to individual differences in the capacity of visual working memory (vWM). According to this account, the maximum amount of information that can be entered into vWM at one time, or the number of “slots” available to store the information, varies across individuals (3, 5–8). Other contemporary accounts, however, relate the individual differences in vWM performance to variability in attentional control, as well as capacity (9–12). One such attention-based perspective holds that when faced with multiple visual objects, low-capacity individuals have difficulty filtering relevant from irrelevant information (11–15). More specifically, this filtering-efficiency hypothesis proposes that attention regulates the flow of sensory information to the limited-capacity vWM system and that consuming capacity with task-irrelevant information effectively reduces storage capacity for task-relevant items. This hypothesis helps to explain why low-capacity individuals sometimes store more items in vWM than do high-capacity individuals: whereas high-capacity individuals encode only task-relevant items, low-capacity individuals encode irrelevant items along with task-relevant items (15).

Although there is mounting evidence for the filtering-efficiency hypothesis, little is known about the precise mechanism responsible for controlling access to vWM or how its operation differs in low- and high-capacity individuals. Theoretically, filtering can be achieved by enhancing the representation of a to-be-remembered item or by suppressing the representation of a to-be-ignored item (16). Accordingly, the inefficient filtering observed in low-capacity individuals might be linked to problems enhancing relevant items, problems suppressing irrelevant items, or both. Precise characterization of individual differences in filtering efficiency requires not only a method for determining what items gain access to vWM but also a method for isolating processes associated with the two diametrically opposed facets of filtering. Behavioral measures (e.g., negative priming) have been used to study the link between attention and vWM capacity (17, 18), but given the difficulty in linking such measures to specific processes (e.g., perceptual inhibition, memory retrieval), existing behavioral results do not clearly indicate whether individual differences in capacity are related to selective enhancement or suppression.

Researchers have started to develop event-related potential (ERP) methods to determine how attention-filtering capabilities vary as a function of vWM capacity. In one pair of studies (19, 20), participants were cued in advance to attend to the location of an impending visual target that was accompanied by at least one distractor item on the same side of fixation (with an equal number of items on the opposite side of fixation). After a brief interval, bilateral “probe” stimuli were presented to assess the spatial gradient of attention. ERPs elicited by the probes were used to compute an attention-gradient index, which was positive when attention was tightly focused at the target location and was near zero when attention was broadly distributed across the items in the cued hemifield. Low-capacity individuals were found to have a broader distribution of attention than high-capacity individuals. This finding could indicate that low-capacity individuals are unable to prevent the inadvertent capture of attention by nearby distractors (19), to boost the target’s representation over and above those of nearby distractors, or to maintain a tight focus of attention at the cued location before the appearance of the target display. At present, it is impossible to distinguish between these alternatives in part because the attention-gradient index that was used did not isolate target-selection and distractor-suppression processes separately.

In the present study, we recorded ERPs during a unidimensional variant of the additional singleton search paradigm and isolated specific components known to reflect stimulus selection (N2pc) and active suppression (distractor positivity, PD). The N2pc, an enhanced negative potential observed contralateral to attended targets, is a well-known electrophysiological index of attentional selection that emerges over the posterior scalp 180–200 ms after the appearance of a search array (21, 22). In contrast, the PD is an enhanced positive potential observed contralateral to task-irrelevant distractors in the same time interval (23, 24). Two key pieces of evidence indicate that the PD is associated with an active suppression process. First, the PD is present when observers must carefully inspect another task-relevant item (target) but is absent when observers merely have to detect the target (23). Second, the amplitude of the PD is predictive of the speed with which participants respond to a target on a trial-by-trial basis, with faster response times (i.e., less distraction) associated with larger PD amplitudes (24, 25). These findings indicate that the visual system resolves attentional competition in demanding identification tasks by suppressing potentially distracting items, but that the ability to suppress, and thus to prevent distraction, varies across trials.
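Both the N2pc and the PD are defined by contrasting activity recorded contralateral versus ipsilateral to a lateral item. A minimal sketch of that contra-minus-ipsi computation (array shapes, window bounds, and helper names are illustrative, not the authors' code):

```python
import numpy as np

def contra_minus_ipsi(contra, ipsi):
    """Difference wave isolating lateralized components such as the N2pc
    (negative relative to attended targets) and the PD (positive relative
    to to-be-suppressed distractors). Inputs are trials x time arrays of
    voltages recorded contralateral and ipsilateral to the lateral item."""
    return contra.mean(axis=0) - ipsi.mean(axis=0)

def mean_amplitude(diff_wave, times, t_start, t_end):
    """Mean amplitude of the difference wave in a time window (seconds)."""
    mask = (times >= t_start) & (times <= t_end)
    return diff_wave[mask].mean()
```

In this convention, a reliably positive window mean contralateral to the distractor would be read as a PD, and a negative one contralateral to the target as an N2pc.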

Armed with these two electrophysiological indices of attention, we asked whether individuals with higher vWM capacities are better able to select items of interest or to suppress potentially distracting items. Participants searched multi-item displays for a prespecified color singleton while attempting to ignore other, task-irrelevant color singletons that could appear in the same displays. Each display contained eight or nine same-color nontargets, one yellow target, and on distractor-present trials, one red or blue distractor (Fig. 1). The color of the nontargets was varied (all green or all orange) to disentangle distractor salience from distractor color. Specifically, the red distractor was the most salient singleton against green nontargets, whereas the blue distractor was the most salient singleton against orange nontargets (this was confirmed in a behavioral pilot experiment; see SI Results). Target- and distractor-related ERPs were measured separately for individuals with low, medium, and high vWM capacities to determine whether the attentional deficits associated with low capacity are attributable to difficulties selecting an object of interest, actively suppressing irrelevant objects, or both.

ERPs elicited by displays containing a midline target and a lateral distractor for each nontarget condition. Time 0 reflects the onset of the search display, and negative voltage deflections are plotted above the x-axis, by convention. Waveforms were recorded over the lateral occipital scalp (electrodes PO7 and PO8). (A) ERPs recorded contralateral and ipsilateral to a low-salience distractor. (B) ERPs recorded contralateral and ipsilateral to a high-salience distractor.

Materials and Methods


Thirteen subjects (six females; age range, 20–37 years) participated in the study. All had normal or corrected-to-normal vision and reported no history of neurological problems. Informed consent was obtained from all participants in accordance with the guidelines and approval of the Columbia University Institutional Review Board.


We used a set of 12 face images (face database, Max Planck Institute for Biological Cybernetics, Tuebingen, Germany) and 12 car grayscale images (image size, 512 × 512 pixels; 8 bits/pixel). All images were equated for spatial frequency, luminance, and contrast. They all had identical magnitude spectra (average magnitude spectrum of all images in the database), and their corresponding phase spectra were manipulated using the weighted mean phase (Dakin, 2002) technique to generate a set of images characterized by their percentage of phase coherence. For the first experiment, we processed each image to have six different phase coherence values (20, 25, 30, 35, 40, and 45%) (Fig. 1B). In addition, for the second experiment, we colorized our images with subtle red and green tones. We performed this adjustment by manipulating the hue (H), saturation (S), and value (V) color space of the original images (red: H = 0.04, S = 0.17, V unchanged; green: H = 0.34, S = 0.23, V unchanged). A Dell (Round Rock, TX) Precision 530 Workstation with an nVidia (Santa Clara, CA) Quadro4 900XGL graphics card and E-Prime software (Psychological Software Tools, Pittsburgh, PA) controlled the stimulus display. A liquid crystal display projector (LP130 InFocus, Wilsonville, OR) was used to project the images through a radio frequency-shielded window onto a front projection screen. Each image subtended 33 × 22° of visual angle.

Behavioral paradigms.

Subjects performed two versions of a simple categorization task. In the first experiment, they had to discriminate between grayscale images of faces and cars. Within a block of trials, face and car images over a range of phase coherences were presented in random order. The range of phase coherence levels was chosen to span psychophysical thresholds. All subjects performed nearly perfectly at the highest phase coherence but performed near chance for the lowest one. In the second experiment, colorized face and car trials of 30 and 45% phase coherence were presented in random order. In this version of the experiment, subjects were presented with a visual cue for 400 ms that was followed by a 600 ms delay before the next image presentation. Based on the cue, subjects had to discriminate either face versus car or the color of the image (i.e., red vs green). Subjects reported their decisions by pressing one of two mouse buttons, left for faces (and red) and right for cars (and green), using their right index and middle fingers, respectively. All images were presented for 30 ms, followed by an interstimulus interval (ISI) that was randomized in the range of 1500–2000 ms. Subjects were instructed to respond as soon as they formed a decision and before the next image was presented. In both experiments, a total of 50 trials per behavioral condition were presented (i.e., 600 trials overall for each experiment). Schematic representations of the two behavioral paradigms are given in Figure 1, A and C, respectively. Trials in which subjects failed to respond within the ISI were marked as no-choice trials and were discarded from additional analysis.

Data acquisition.

EEG data were acquired in an electrostatically shielded room (ETS-Lindgren, Glendale Heights, IL) using a Sensorium (Charlotte, VT) EPA-6 Electrophysiological Amplifier from 60 Ag/AgCl scalp electrodes and from three periocular electrodes placed below the left eye and at the left and right outer canthi. All channels were referenced to the left mastoid with input impedance of <15 kΩ, and the chin electrode was used as ground. Data were sampled at 1000 Hz with an analog pass band of 0.01–300 Hz using 12 dB/octave high-pass and eighth-order elliptic low-pass filters. Subsequently, a software-based 0.5 Hz high-pass filter was used to remove DC drifts, and 60 and 120 Hz (harmonic) notch filters were applied to minimize line-noise artifacts. These filters were designed to be linear phase to minimize delay distortions. Motor response and stimulus events recorded on separate channels were delayed to match latencies introduced by digitally filtering the EEG.
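The offline filtering steps described above (a 0.5 Hz high-pass to remove DC drift plus 60 and 120 Hz notches for line noise) can be sketched roughly as follows. This is a simplified stand-in, not the authors' pipeline: it uses zero-phase `filtfilt` rather than the delay-compensated linear-phase filters in the study, and the filter order and notch Q are assumed values:

```python
import numpy as np
from scipy.signal import butter, iirnotch, filtfilt

FS = 1000  # sampling rate in Hz, as in the recording described above

def preprocess(eeg, fs=FS):
    """0.5 Hz high-pass to remove DC drift, then 60 and 120 Hz notch
    filters to minimize line-noise artifacts. filtfilt applies each
    filter forward and backward, so the result has zero phase delay
    (a common alternative to linear-phase filtering plus delay
    matching). `eeg` is an array with time on the last axis."""
    b, a = butter(4, 0.5 / (fs / 2), btype="high")  # assumed 4th order
    out = filtfilt(b, a, eeg, axis=-1)
    for f0 in (60.0, 120.0):                         # line noise + harmonic
        b, a = iirnotch(f0, Q=30.0, fs=fs)           # assumed Q
        out = filtfilt(b, a, out, axis=-1)
    return out
```

Because `filtfilt` is acausal, event channels would not need the latency compensation described above; that compensation is only required for causal (e.g., linear-phase FIR) filtering.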

Movement artifact removal.

Before the main experiment, subjects completed an eye movement calibration experiment during which they were instructed to blink repeatedly on the appearance of a white-on-black fixation cross and then to make several horizontal and vertical saccades according to the position of the fixation cross subtending 1 × 1° of the visual field. Horizontal saccades subtended 33°, and vertical saccades subtended 22°. The timing of these visual cues was recorded with EEG. This enabled us to determine linear components associated with eye blinks and saccades (using principal component analysis) that were subsequently projected out of the EEG recorded during the main experiment (Parra et al., 2003). Trials with strong eye movements or other movement artifacts were manually removed by inspection.
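Projecting eye-artifact components out of the EEG, as described above, can be sketched with a simple PCA-based spatial projector. This is a simplification of the Parra et al. (2003) approach; the component count and array shapes are illustrative:

```python
import numpy as np

def artifact_projector(calib, n_components=2):
    """Estimate the dominant spatial components of eye artifacts from
    calibration data (channels x samples, recorded during instructed
    blinks/saccades) via PCA (SVD), and return a projection matrix
    that removes those components from subsequent recordings."""
    calib = calib - calib.mean(axis=1, keepdims=True)
    U, _, _ = np.linalg.svd(calib, full_matrices=False)
    A = U[:, :n_components]                 # artifact topographies
    return np.eye(calib.shape[0]) - A @ A.T  # orthogonal projector

# usage sketch: clean = artifact_projector(calib) @ eeg
# where eeg is channels x samples from the main experiment
```

During calibration the artifacts dominate the recording, so their topographies are well estimated; applying the projector then attenuates any activity lying in that spatial subspace, which is why trials with residual movement artifacts were still removed by inspection.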

Data analysis.

We used a single-trial analysis of the EEG to discriminate between any two given experimental conditions (i.e., face vs car or easy vs hard). Logistic regression was used to find an optimal basis for discriminating between the two conditions over a specific temporal window (Parra et al., 2002, 2005). Specifically, we defined a training window starting at a poststimulus onset time τ, with a duration of δ, and used logistic regression to estimate a spatial weighting vector wτ,δ, which maximally discriminates between sensor array signals X for the two conditions as follows: y = wτ,δ⊤X, in which X is an N × T matrix (N sensors and T time samples). The result is a “discriminating component” y, which is specific to activity correlated with one condition while minimizing activity correlated with both task conditions, such as early visual processing. We use the term “component” instead of “source” to make it clear that this is a projection of all the activity correlated with the underlying source. For our experiments, the duration of the training window (δ) was 60 ms and the window onset time (τ) was varied across time. We used the reweighted least-squares algorithm to learn the optimal discriminating spatial weighting vector wτ,δ (Jordan and Jacobs, 1994).

The discrimination vector wτ,δ can be seen as the orientation (or direction) in the space of the EEG sensors that maximally discriminates between the two experimental conditions. Thus, the time dimension defines the time of a window (relative to either the stimulus or the response) used to compute this discrimination vector. Given a fixed window width (60 ms in this case), sweeping the training window from the onset of the visual stimulation to the earliest response time represents the evolution of the discrimination vector across time. Within a window, at a fixed time, all samples are treated as independent and identically distributed to train the discriminator. Once the discriminator is trained, it is applied across all time so as to visualize the projection of the trials onto that specific orientation in EEG sensor space. A discriminating component is defined as one such discrimination vector, with its activity visualized by projecting the data across all time onto that orientation. We call this visualization a discriminant component map. For instance, for recurring components, one would expect activity trained during one window time to also be present at another time.
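A rough sketch of this procedure, training a spatial weighting vector on one window (samples within the window treated as i.i.d., as described above) and then projecting all trials across all time. Function names and array layouts are illustrative, and scikit-learn's solver stands in for the cited reweighted least-squares algorithm:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_window_discriminator(X_trials, labels, tau, delta):
    """Estimate the spatial weighting vector w for the window
    [tau, tau + delta). X_trials: trials x N sensors x T samples."""
    win = X_trials[:, :, tau:tau + delta]          # trials x N x delta
    n_trials, n_ch, n_s = win.shape
    # stack every window sample from every trial as a training example
    Xw = win.transpose(0, 2, 1).reshape(-1, n_ch)  # (trials*delta) x N
    yw = np.repeat(labels, n_s)
    clf = LogisticRegression(max_iter=1000).fit(Xw, yw)
    return clf.coef_.ravel()                        # the vector w

def component_map(X_trials, w):
    """Project all trials onto w across all time: one row per trial,
    i.e., the discriminant component map y_i(t)."""
    return np.einsum("n,knt->kt", w, X_trials)
```

Sweeping `tau` across the epoch reproduces the sliding-window analysis; sorting the rows of the component map by reaction time gives the visualization described below.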

To visualize the profile of these components (stimulus or response locked) across all trials, we constructed discriminant component maps. We aligned all trials of an experimental condition of interest to the onset of visual stimulation and sorted them by their corresponding reaction times (RTs). Therefore, each row of one such discriminant component map represents a single trial across time [i.e., yi(t)]. The discriminant component maps used in this study (see Fig. 7) represent face trials with the mean of the car trials subtracted (i.e., yfaces − ȳcar).

To provide a functional neuroanatomical interpretation of the resultant discriminating activity, and given the linearity of our model, we computed the electrical coupling coefficients for the linear model as follows: a = Xy⊤/(yy⊤). Equation 2 describes the electrical coupling a of the discriminating component y that explains most of the activity X. To compute these coefficients, y is computed only for times during the specific window used to calculate the weights for that component. Strong coupling indicates low attenuation of the component and can be visualized as the intensity of the “sensor projections” a. a can also be seen as a forward model of the discriminating component activity (Parra et al., 2002, 2005).
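The coupling coefficients can be sketched in a couple of lines, assuming the standard forward-model formula a = Xy⊤/(yy⊤) from Parra et al. (2002, 2005); the function name is illustrative:

```python
import numpy as np

def forward_model(X, y):
    """Sensor projections ('forward model') of a discriminating
    component: a = X y / (y^T y), where X is the N x T sensor data
    restricted to the training window and y is the component time
    course (length T). Large entries of a mark sensors that carry
    the component strongly (low attenuation)."""
    return X @ y / (y @ y)
```

Unlike the weight vector w, which mixes signal and noise statistics, the forward model a is directly interpretable as a scalp topography, which is why it is the quantity plotted as sensor projections.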

We quantified the performance of the linear discriminator by the area under the receiver operating characteristic (ROC) curve, referred to as Az, with a leave-one-out approach (Duda et al., 2001). We used the ROC Az metric to characterize the discrimination performance while sliding our training window from stimulus onset to response time (varying τ). Finally, to assess the significance of the resultant discriminating component, we used a bootstrapping technique to compute an Az value leading to a significance level of p = 0.01. Specifically, we computed a significance level for Az by performing the leave-one-out test after randomizing the truth labels of our face and car trials. We repeated this randomization process 100 times to produce an Az randomization distribution and compute the Az leading to a significance level of p = 0.01.
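The leave-one-out Az and its permutation-based significance threshold might be sketched as follows (scikit-learn's logistic regression stands in for the discriminator; function names and the feature layout are illustrative):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneOut, cross_val_predict

def loo_az(X, y):
    """Leave-one-out Az: each trial is scored by a discriminator
    trained on all remaining trials; Az is the area under the ROC
    curve of those held-out scores. X: trials x features."""
    scores = cross_val_predict(LogisticRegression(), X, y,
                               cv=LeaveOneOut(),
                               method="decision_function")
    return roc_auc_score(y, scores)

def az_threshold(X, y, n_perm=100, p=0.01, seed=0):
    """Label-shuffling estimate of the Az value reaching significance
    level p: repeat the leave-one-out test with randomized truth
    labels and take the (1 - p) quantile of the null distribution."""
    rng = np.random.default_rng(seed)
    null = [loo_az(X, rng.permutation(y)) for _ in range(n_perm)]
    return np.quantile(null, 1 - p)
```

A window's discriminating component would then be called significant when its observed `loo_az` exceeds the `az_threshold` computed from the shuffled labels.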

Traditional event-related potential (ERP) analysis was also performed by aligning the data to paradigm-specific events and averaging across trials as well as across subjects where appropriate. When ERP activity was used for additional analysis (e.g., ERP amplitude correlation with other experimental parameters), we averaged activity across short-length temporal windows (typically 40 ms in width) to make our estimates more robust. To visualize the spatial extent of the ERP activity across time, we computed average ERP scalp maps by interpolating the ERP activity across all electrode locations. We used a biharmonic spline interpolation (Sandwell, 1987) that is designed for irregularly spaced data points. All scalp maps were plotted using EEGLAB (Delorme and Makeig, 2004).

Discriminant component peak detection.

To quantify the spread/duration of a discriminant component, we performed single-trial peak detection by fitting a parametric function to the spatially integrated discriminating component y(t). For simplicity, we used a Gaussian profile [as in the study by Gerson et al. (2005)] that is parameterized by its height β, width σ, delay μ, and baseline offset α as follows: y(t) = β exp(−(t − μ)²/(2σ²)) + α. We computed the optimal parameters for each trial by using a nonlinear least-squares Gauss–Newton optimization (Gill et al., 1981). The center and width of the discriminating training window were used to initialize the optimization.
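A sketch of the single-trial Gaussian peak fit, with scipy's nonlinear least squares standing in for the cited Gauss–Newton optimizer (function names are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(t, beta, mu, sigma, alpha):
    """Gaussian profile: height beta, delay mu, width sigma,
    baseline offset alpha."""
    return beta * np.exp(-(t - mu) ** 2 / (2 * sigma ** 2)) + alpha

def fit_peak(t, y, mu0, sigma0):
    """Nonlinear least-squares fit of the Gaussian to one trial's
    spatially integrated component y(t). The training-window center
    (mu0) and width (sigma0) initialize the optimization, as
    described above. Returns (beta, mu, sigma, alpha)."""
    p0 = [y.max() - y.min(), mu0, sigma0, y.min()]
    popt, _ = curve_fit(gauss, t, y, p0=p0)
    return popt
```

The fitted μ gives the single-trial peak latency and σ the component's spread, which is what allows latency and duration to be compared across trials and conditions.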

Diffusion model simulations.

The diffusion model assumes that fast two-choice decisions are made by a noisy process that accumulates information over time from a starting point toward one of two response criteria or boundaries, as in Figure 2B. The starting point is labeled z, and the boundaries are labeled “a” and “0.” When one of the boundaries is reached, a response is initiated. The rate of accumulation of information is called drift rate v, and it is determined by the quality of the information available from the stimulus. The better the information quality, the larger the drift rate toward the appropriate decision boundary and the faster and more accurate the response. Within-trial variability in the accumulation of information results in processes with the same mean drift rate terminating at different times (producing RT distributions) and sometimes at different boundaries (producing errors). Speed–accuracy tradeoffs are modulated by the positions of the boundaries as follows: moving boundaries closer to the starting point speeds responses and decreases accuracy. Response time distributions in two-choice tasks are positively skewed, which occurs naturally in the model by simple geometry: the increase in RT is larger if a lower value of drift rate is decreased by some amount than if a larger value of drift rate is decreased by the same amount. Besides the decision process, there are nondecision components of processing such as encoding and response execution (Fig. 2A). These processes are combined in the model, and their contribution to RT has mean Ter (Ratcliff and Tuerlinckx, 2002).

An illustration of the diffusion model. Parameters of the model are as follows: a, boundary separation; z, starting point; Ter, mean value of the nondecision component of RT; η, SD in drift across trials; sz, range of the distribution of starting point (z) across trials; v, drift rate; st, range of the distribution of nondecision times across trials; s, SD in variability in drift within trials. A, Encoding processes (x) and response output processes (y) combine to give the nondecision component (z) with mean Ter. B, The diffusion process with two sample paths, RT distributions for correct and error responses, and all of the relevant parameters outlined above.

In the diffusion model, components of processing are assumed to vary from trial to trial. Variability in drift rate across trials (normally distributed with SD η) gives rise to error responses that are relatively slow compared with correct responses, and variability in starting point across trials (uniformly distributed with range sz) gives rise to relatively fast errors. Whether errors are faster or slower than correct responses for an experimental condition depends on the relative amounts of drift rate and starting point variability, drift rate values, and boundary positions (Ratcliff and Rouder, 1998; Ratcliff et al., 1999). Across-trial variability in Ter is uniformly distributed with range st (Ratcliff and Tuerlinckx, 2002).

The diffusion model serves to map performance characteristics onto underlying processes. From the probability of a correct response and the RT distributions for correct and error responses for each of the experimental conditions, the model extracts estimates of the quality of the stimulus information that enters the decision process for each condition (drift rate), the amount of information that must be accumulated before a decision can be made (boundary positions), the time taken by nondecision components of RT (Ter), and the amount of variability across trials in each of the processing components.
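The accumulation process described above can be sketched with a simple Euler simulation. This is an illustrative sketch, not the fitting procedure used in the study: the across-trial variability parameters (η, sz, st) are omitted for brevity, and the parameter values are assumptions in the range conventional for this model:

```python
import numpy as np

def simulate_diffusion(v, a, z, ter, n_trials=500, s=0.1, dt=0.001, seed=0):
    """Euler simulation of the diffusion decision process: evidence
    starts at z and accumulates with drift rate v and within-trial
    noise s until it reaches boundary a (choice 1, 'correct' here
    for positive drift) or boundary 0 (choice 0, an error). The
    returned RT is decision time plus the nondecision time Ter."""
    rng = np.random.default_rng(seed)
    rts, choices = [], []
    for _ in range(n_trials):
        x, t = z, 0.0
        while 0.0 < x < a:
            x += v * dt + s * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(t + ter)
        choices.append(1 if x >= a else 0)
    return np.array(rts), np.array(choices)
```

With the starting point at z = a/2, larger drift rates yield faster and more accurate responses, and moving the boundaries closer together trades accuracy for speed, exactly the qualitative behavior the text describes; the simulated RT distributions also show the characteristic positive skew.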

Here we fit the model to the data from the six subjects in the first experiment in which a range of experimental conditions (i.e., different phase coherence levels) were available. We used a χ² method (Ratcliff and Tuerlinckx, 2002) to perform the fits. We used the simulation results to relate different parameters of the diffusion model to our experimental observations.


We successfully provided neurophysiological evidence for early, automatic attention biases that are relevant for mate selection. The observed time course of the ERP effects suggests that both men and women automatically select physically distinctive faces for prioritized processing, but, in accord with our evolutionary hypothesis, only in men was this followed by enhanced evaluative processing associated with motivated attention to attractive opposite sex faces. With reference to the question that inspired this research, these results suggest that, in addition to threat-related stimuli, other evolutionary relevant information is also prioritized by our attention systems. Overall, our results bring the integration between evolutionary social psychology and cognitive neuroscience one step further, which we believe is necessary to fully understand the adapted human mind.

This study was carried out at the University of Kent. The authors would like to thank Brian Spisak for help with stimulus development and Keith Franklin for programming support. This work was funded by The Nuffield Foundation (URB 35850).

Impaired error-related processing in patients with first-episode psychosis and subjects at clinical high risk for psychosis: An event-related potential study

Impaired event-related potential (ERP) indices reflecting performance-monitoring systems have been consistently reported in patients with schizophrenia. However, whether these impairments are already present in the early phases of psychosis, such as in first-episode psychosis (FEP) patients and individuals at clinical high risk (CHR) for psychosis, has not yet been clearly established.


Thirty-seven FEP patients, 22 CHR subjects, and 22 healthy controls (HC) performed a visual go/no-go task so that three ERP components associated with performance monitoring—error-related negativity (ERN), correct response negativity (CRN), and error positivity (Pe)—could be assessed. Repeated-measures analysis of variance (ANOVA) with age and sex as covariates was used to compare ERN, CRN, and Pe across the groups.


Repeated-measures ANOVA with age and sex as covariates revealed that compared with HC, FEP patients and CHR subjects showed significantly smaller ERN amplitudes at the Fz (F = 4.980, P = 0.009) and FCz (F = 3.453, P = 0.037) electrode sites. Neither CRN nor Pe amplitudes showed significant differences across the FEP, CHR, and HC groups.
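A group comparison of this kind can be sketched as a covariate-adjusted model comparison. The sketch below uses synthetic single-electrode ERN amplitudes whose group means and sample sizes loosely follow the pattern reported above (the numbers are illustrative, not the study's data), and it simplifies the repeated-measures design to a between-group ANCOVA at one electrode site: an F-test asking whether group membership explains variance beyond age and sex.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Synthetic ERN amplitudes (µV) at one electrode; group means loosely
# follow the reported pattern (FEP/CHR less negative than HC).
# All values are illustrative assumptions, not the study's data.
groups = {"HC": (22, -6.0), "CHR": (22, -4.0), "FEP": (37, -3.5)}
y, g, age, sex = [], [], [], []
for gi, (name, (n, mu)) in enumerate(groups.items()):
    y += list(rng.normal(mu, 2.0, n))
    g += [gi] * n
    age += list(rng.normal(22, 4, n))
    sex += list(rng.integers(0, 2, n))
y, g = np.array(y), np.array(g)

def rss(X, y):
    """Residual sum of squares of an ordinary least-squares fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return r @ r

# ANCOVA via model comparison: does adding group dummies to a model
# with age and sex covariates significantly reduce residual error?
ones = np.ones_like(y)
covars = np.column_stack([ones, age, sex])
dummies = np.column_stack([(g == 1).astype(float), (g == 2).astype(float)])
full = np.column_stack([covars, dummies])

df1 = 2                       # two group dummies added
df2 = len(y) - full.shape[1]  # residual degrees of freedom
F = ((rss(covars, y) - rss(full, y)) / df1) / (rss(full, y) / df2)
p = stats.f.sf(F, df1, df2)
print(f"group effect: F({df1},{df2}) = {F:.2f}, p = {p:.4f}")
```

The study's actual analysis additionally treats electrode site as a repeated measure, which this single-site sketch omits for brevity.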


These findings suggest that performance monitoring is already compromised during the early course of psychotic disorders, evident in FEP patients and CHR subjects, as reflected in the reduced ERN amplitude. Considering these findings, ERN could serve as a potential indicator of early-stage psychosis.

The ability to monitor one's own performance is fundamental and pertinent for goal-directed behaviors in social functioning; 1 this ability enables individuals to integrate intended goals and actual performance. Deficient performance monitoring, which has been consistently reported in patients with schizophrenia, is also related to impaired social functioning in patients with the disorder. 2 Furthermore, researchers have proposed that positive symptoms or thought disorders are caused by the inability to monitor behavior resulting from a discrepancy between internally generated action and externally induced action. 3, 4 Given these clinical implications for psychotic disorders, investigating the neural substrates of performance monitoring is essential for understanding psychotic disorders in depth.

To better comprehend the neural mechanism of performance monitoring, electrophysiological studies have identified three event-related potential (ERP) components associated with error-related processing or monitoring systems, namely, error-related negativity (ERN), correct response negativity (CRN), and error positivity (Pe). ERN is a negative deflection of the ERP wave observed following an erroneous response in behavioral tasks. 5, 6 CRN is a smaller negative deflection following a correct response, which occurs over the same time course and at the same location as ERN, 7, 8 and it reflects conflict monitoring or partial error detection. 7 Pe is a positive deflection observed between 250 ms and 450 ms after the onset of an error response, which is associated with conscious error awareness or motivation to correct errors. 9, 10 Previous studies investigating these ERP components of performance monitoring have consistently reported reduced ERN 11-13 and enlarged CRN 12, 14 amplitudes in schizophrenia patients compared with those of healthy controls (HC). With regard to Pe, most of the studies reported Pe amplitudes to be normal in schizophrenia, 12, 15 with a few exceptions reporting a reduction in amplitude. 16, 17 Previous studies were performed in schizophrenia patients with a relatively long duration of illness; 12, 13 thus, the results may have been confounded by the effects of aging, medication exposure, and disease chronicity.
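As a concrete illustration of how such components are typically quantified, the sketch below computes mean amplitudes in fixed post-response windows from a synthetic response-locked epoch. Only the 250-450 ms Pe window comes from the text; the ERN window, sampling rate, and amplitudes are illustrative assumptions, and a real pipeline would average many artifact-free error trials per subject at the relevant electrode.

```python
import numpy as np

def mean_amplitude(epoch, times, window):
    """Mean voltage of a response-locked epoch within a time window (s)."""
    lo, hi = window
    mask = (times >= lo) & (times <= hi)
    return epoch[mask].mean()

# Synthetic response-locked epoch: 1 s sampled at 500 Hz,
# time 0 = the button press (values in microvolts, illustrative only).
fs = 500
times = np.arange(-0.2, 0.8, 1.0 / fs)
epoch = np.random.default_rng(1).normal(0.0, 1.0, times.size)
epoch[(times >= 0.0) & (times <= 0.1)] -= 6.0    # ERN-like negativity
epoch[(times >= 0.25) & (times <= 0.45)] += 4.0  # Pe-like positivity

ern = mean_amplitude(epoch, times, (0.0, 0.1))   # assumed ERN window
pe = mean_amplitude(epoch, times, (0.25, 0.45))  # Pe window from the text
print(f"ERN ≈ {ern:.1f} µV, Pe ≈ {pe:.1f} µV")
```

A smaller (less negative) ERN value extracted this way is what the group comparisons in the following paragraphs refer to.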

It is unclear whether the ERP components of performance monitoring are also impaired in the early stages of psychosis, such as in first-episode psychosis (FEP) patients and in individuals at clinical high risk (CHR) for psychosis; hence, these components should be explored during these stages of psychosis, since past findings from chronic schizophrenia patients may have been affected by potential confounders, such as disease chronicity, exposure to antipsychotics, and relatively old age. In addition, as it is well known that a shorter duration of untreated psychosis (DUP) is essential for a better schizophrenia prognosis, 18 investigation of biomarkers that could identify patients in the earlier stages of the disorder would aid efforts to improve clinical outcomes. However, only one study has explored ERP components related to performance monitoring across the FEP, CHR, and HC groups. 16 In their seminal study, Perez et al. 16 reported that FEP patients showed smaller ERN, larger CRN, and smaller Pe amplitudes than HC. In addition, CHR subjects displayed smaller ERN amplitudes compared with HC, but CRN and Pe amplitudes in CHR subjects were comparable to those of HC. Despite the clinical implications of performance monitoring impairments in early psychosis patients, to the best of our knowledge, no follow-up study has been reported since the Perez et al. 16 study was published.

In summary, the current study aimed to investigate whether ERP components (i.e., ERN, CRN, and Pe) that reflect performance monitoring are compromised in the early stages of psychosis (i.e., FEP and CHR), as indicated by previous findings in chronic schizophrenia patients 12 and in the Perez et al. 16 study. We hypothesized that FEP and CHR participants would show smaller ERN amplitudes, larger CRN amplitudes, and intact Pe amplitudes compared with those of HC.
