What is a neurometric function?


Is the output of a neurometric function the output of a pattern classifier? That is, is it calculated by processing brain-imaging data to predict which choice was made, in the same way a psychometric function is calculated from behavior?

What confuses me is whether the neurometric function describes a classifier's read-out of the neural response, or the process by which visual stimuli are converted into a neural response.
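For context, the standard construction (following Britten and colleagues' motion-discrimination work) builds the neurometric function directly from neural responses: for each stimulus strength, an ideal observer (an ROC analysis over trial-by-trial responses, or a pattern classifier applied to imaging data) predicts the choice, and the proportion of correct predictions is plotted against stimulus strength so it can be compared with the behavioral psychometric function. A minimal sketch with simulated firing rates (all numbers hypothetical):

```python
import numpy as np

def neurometric_point(pref, null):
    """ROC area: probability that an ideal observer, shown one response
    from each distribution, correctly labels the preferred-direction one
    (ties split 50/50)."""
    pref = np.asarray(pref)[:, None]
    null = np.asarray(null)[None, :]
    return np.mean(pref > null) + 0.5 * np.mean(pref == null)

rng = np.random.default_rng(0)
coherences = [0.05, 0.1, 0.2, 0.4]
# Hypothetical firing rates (spikes/s): responses to the preferred
# direction grow with stimulus strength; null responses do not.
neurometric = [
    neurometric_point(rng.normal(10 + 40 * c, 5, 200),
                      rng.normal(10, 5, 200))
    for c in coherences
]
print(neurometric)  # rises from near chance toward 1.0
```

Plotting these proportions against the coherences gives the neurometric function; its slope and threshold can then be compared directly with the psychometric function measured from the subject's actual choices.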


Labs and Research Topics

The department's research, in which students can take part in their practical study components, spans cutting-edge topics such as multisensory integration, brain oscillations and behaviour, cortical plasticity, computational neuroscience, and pharmaco-neuroimaging, among others. A variety of modern neuroscience tools and psychology labs are available to gain hands-on experience in magnetic resonance imaging, magnetoencephalography, high-density electroencephalography, eye-tracking, transcranial magnetic and electric stimulation, and psychophysics. The Department of Psychology in Oldenburg is among the best-equipped in the country and is characterised by its unique decision to dedicate its resources to this Master's course.

Applied neurocognitive psychology lab (Prof. Dr. Jochem Rieger)

We investigate neural processes in the sensation-perception-action cycle in an interdisciplinary team. Central to our research are cutting-edge brain-decoding methods, which we use with invasive and non-invasive neuroimaging in humans to learn how the brain accomplishes everyday tasks. The aim of our research is twofold. On the one hand, we are interested in basic research questions on how the brain constructs percepts from environmental sensory data, represents percepts, makes decisions, and controls muscles to interact with the environment. On the other hand, we are interested in applying our research to construct brain-machine interfaces that supplement human cognition, communication, and motor function. Examples of our work on decoding cognitive states and of our brain-controlled grasping project can be found on the lab webpage.

Biological psychology lab (Prof. Dr. Christiane Thiel)

Our research focuses on visuospatial and auditory attention, learning and plasticity, as well as the pharmacological modulation of such processes. The combination of pharmacological challenges with cognitive tasks in the context of functional neuroimaging (fMRI) studies is a powerful approach to directly assess pharmacological modulation of human brain activity. For example, we have performed several pharmacological fMRI studies showing a nicotinic modulation of visuospatial attention and shown that nicotine increases brain network efficiency. A long-term goal of such studies is to provide an experimental approach that has relevance to studying mechanisms of recovery and treatment effects in different patient populations.

Experimental psychology lab (Prof. Dr. Christoph Herrmann)

The lab is headed by Christoph Herrmann and focuses on physiological correlates of cognitive functions such as attention, memory and perception. The methods that are used comprise electroencephalography (EEG), magnetoencephalography (MEG), functional magnetic resonance imaging (fMRI), transcranial electric stimulation (TES), transcranial magnetic stimulation (TMS), eye-tracking, neural network simulations, and psychophysics. The focus of the research lies in the analysis of oscillatory brain mechanisms. Oscillatory brain activity is considered to be one of the electrophysiological correlates of cognitive functions. We analyse these brain oscillations in healthy and pathological conditions, simulate them for a better understanding and try to modulate them.

Psychological methods and statistics lab (Prof. Dr. Andrea Hildebrandt)

By applying and advancing multivariate statistical and psychometric modeling techniques, our research aims at a better understanding of individual differences in general cognitive functioning and social cognition. We develop and evaluate computerized test batteries rooted in experimental psychology for measuring human abilities, and combine psychometric, neurometric (EEG, (f)MRI), molecular-genetic, and hormonal assessments to investigate within- and between-person variation in cognition, emotion, and personality. A special focus of our research is the processing of invariant and variant facial information, a basic domain of social cognition. We ask how abilities in the social domain are special compared with cognitive processing in general. To this end we investigate typically functioning individuals across the life span, including old age and pathological conditions. Beyond these goals, we enjoy contemplating conceptual issues in psychological measurement.

Neuropsychology lab (Prof. Dr. Stefan Debener)

We use methods from experimental psychology and psychophysiology to study the relationship between the human brain and cognitive functions. One focus of our research is related to sensory deprivation and compensatory mechanisms. We study how hearing loss and deafness change the functional organization of the brain and what the consequences of these changes are for auditory rehabilitation. Related to this topic are studies investigating how information from different sensory modalities is combined to create a coherent percept of an object. Our key tool is high-density EEG, but we also use MEG, fMRI, and concurrent EEG-fMRI recordings. Because these tools provide us with complex, mixed signals that reflect different features of human brain function, we spend some time on the application and evaluation of signal un-mixing and signal integration procedures as well.

Neurocognition and functional neurorehabilitation group (Dr. Cornelia Kranczioch)

The research of the group is located at the intersection of neuropsychology and neurorehabilitation. In brief, we are interested in how the treatment of impairments resulting from central nervous system disorders can benefit from neurocognitive approaches and theories. Our research currently focuses on using motor imagery, that is, the mental practice of movements, to support neurorehabilitation, for instance after stroke or in Parkinson's disease. In close collaboration with the Neuropsychology lab we conduct studies in which we combine motor imagery training with lab-based or mobile neurofeedback setups. We run studies in healthy volunteers to learn more about the feasibility and limitations of these neurofeedback applications. Just as important for the group is research aimed at learning more about motor imagery and motor cognition in the absence of neurofeedback. We strive to implement what we learn from these studies in our work with patients.
The second research focus of the group is the neurocognition of visual-temporal attention. Here we work mainly but not exclusively with RSVP paradigms such as the Attentional Blink. Among other things we compare brain activity (EEG, fMRI) in instances in which attention fails and in which it helps to successfully solve a task, or we study brain activity to better understand interindividual variations in task performance.

Neurophysiology of everyday life group (Dr. Martin Bleichner)

Unwanted sound, generally referred to as noise, is an environmental pollutant that may cause hearing loss. In addition, noise acts as an unspecific stressor with detrimental effects on biological and psychological processes: noise pollution has been associated with cardiovascular problems, sleep disturbance, and cognitive impairments. These harmful non-auditory effects of noise typically emerge only after exposure has accumulated over time.

However, it is challenging to determine under which conditions environmental noise has adverse effects, because whether a person perceives a sound as disturbing, annoying, or stressful cannot be derived from the acoustic properties of the sound. Any particular sound, independent of its sound pressure level or other features, may be experienced as noise and thus can have negative consequences for well-being. How a sound is perceived depends instead on individual preferences, cognitive capacity, current occupation, and duration of exposure. Therefore, we need a perception-based noise dosimetry that allows quantification of perceived noise exposure over extended periods of time.

Recent developments in mobile electroencephalography (EEG) provide the possibility to study brain activity beyond the lab and thereby allow investigating how individuals perceive noise in everyday situations. Rather than monitoring the presence of noise, we can monitor the perceived noise exposure in the brain. In this research project, we want to use a combination of wireless EEG, concealed ear-centered electrode placement, and smartphone-based signal acquisition to study sound and noise perception in daily-life situations on an individual basis.

We will approach this topic in two parallel research lines. In the first research line, we will establish a relationship between EEG acquisition in the lab and in everyday situations. In the second research line, we will address individual noise perception and noise annoyance. On the one hand, we will work on overcoming the challenges involved in the acquisition and interpretation of EEG-signals that were acquired outside of the lab – this concerns signal artifacts and comparability to lab-based recordings. On the other hand, we will objectify the subjective noise disturbance in the lab and at the workplace. This takes place on three levels: subjective assessment, noise dosimetry and the recording of brain activity. Data obtained in the lab will be related to data obtained at the workplace. Our work will advance the field of mobile ear-centered EEG and will provide new insights on dealing with individual noise exposure.


Contents

All neuroimaging is considered part of brain mapping. Brain mapping can be conceived as a higher form of neuroimaging, producing brain images supplemented by the result of additional (imaging or non-imaging) data processing or analysis, such as maps projecting (measures of) behavior onto brain regions (see fMRI). One such map, called a connectogram, depicts cortical regions around a circle, organized by lobes. Concentric circles within the ring represent various common neurological measurements, such as cortical thickness or curvature. In the center of the circles, lines representing white matter fibers illustrate the connections between cortical regions, weighted by fractional anisotropy and strength of connection. [1] At higher resolutions brain maps are called connectomes. These maps incorporate individual neural connections in the brain and are often presented as wiring diagrams. [2]

Brain mapping techniques are constantly evolving, and rely on the development and refinement of image acquisition, representation, analysis, visualization and interpretation techniques. [3] Functional and structural neuroimaging are at the core of the mapping aspect of brain mapping.

Some scientists have criticized brain-image-based claims made in scientific journals and the popular press, such as the discovery of "the part of the brain responsible for" things like love, musical ability, or a specific memory. Many mapping techniques have a relatively low resolution, with hundreds of thousands of neurons in a single voxel. Many functions also involve multiple parts of the brain, meaning that claims of this type are probably both unverifiable with the equipment used and generally based on an incorrect assumption about how brain functions are divided. It may be that most brain functions will only be described correctly after being measured with much finer-grained methods that look not at large regions but at a very large number of tiny individual brain circuits. Many of these studies also have technical problems, such as small sample size or poor equipment calibration, which mean they cannot be reproduced, considerations that are sometimes ignored in order to produce a sensational journal article or news headline. In some cases brain mapping techniques are used for commercial purposes, lie detection, or medical diagnosis in ways that have not been scientifically validated. [4] [ page needed ]

In the late 1980s in the United States, the Institute of Medicine of the National Academy of Science was commissioned to establish a panel to investigate the value of integrating neuroscientific information across a variety of techniques. [5] [ page needed ]

Of specific interest is using structural and functional magnetic resonance imaging (fMRI), diffusion MRI (dMRI), magnetoencephalography (MEG), electroencephalography (EEG), positron emission tomography (PET), Near-infrared spectroscopy (NIRS) and other non-invasive scanning techniques to map anatomy, physiology, perfusion, function and phenotypes of the human brain. Both healthy and diseased brains may be mapped to study memory, learning, aging, and drug effects in various populations such as people with schizophrenia, autism, and clinical depression. This led to the establishment of the Human Brain Project. [6] [ page needed ] It may also be crucial to understanding traumatic brain injuries (as in the case of Phineas Gage) [7] and improving brain injury treatment. [8] [9]

Following a series of meetings, the International Consortium for Brain Mapping (ICBM) evolved. [10] [ page needed ] The ultimate goal is to develop flexible computational brain atlases.

The interactive citizen-science website Eyewire, launched in 2012, maps mouse retinal cells. In 2021, the most comprehensive 3D map of the human brain to date was published by a U.S. IT company. It shows the neurons and their connections, along with blood vessels and other components, of about one-millionth of a human brain. For the map, a 1 mm³ fragment was sliced into over 5,000 nanometer-thin sections, which were scanned with an electron microscope. The interactive map required 1.4 petabytes of storage space. [12] [13]

Brain mapping is the study of the anatomy and function of the brain and spinal cord through the use of imaging (including intra-operative, microscopic, endoscopic and multi-modality imaging), immunohistochemistry, molecular & optogenetics, stem cell and cellular biology, engineering (material, electrical and biomedical), neurophysiology and nanotechnology.


Objective

Event-related potentials (ERPs) show promise as markers of neurocognitive dysfunction, but conventional recording procedures render measurement of many ERP-based neurometrics clinically impractical. The purpose of this work was (a) to develop a brief neurometric battery capable of eliciting a broad profile of ERPs in a single, clinically practical recording session, and (b) to evaluate the sensitivity of this neurometric profile to age-related changes in brain function.

Methods

Nested auditory stimuli were interleaved with visual stimuli to create a 20-min battery designed to elicit at least eight ERP components representing multiple sensory, perceptual, and cognitive processes (Frequency & Gap MMN, P50, P3, vMMN, C1, N2pc, and ERN). Data were recorded from 21 younger and 21 high-functioning older adults.

Results

Significant multivariate differences were observed between ERP profiles of younger and older adults. Metrics derived from ERP profiles could be used to classify individuals into age groups with a jackknifed classification accuracy of 78.6%.

Conclusions

Results support the utility of this design for neurometric profiling in clinical settings.

Significance

This study demonstrates a method for measuring a broad profile of ERP-based neurometrics in a single, brief recording session. These markers may be used individually or in combination to characterize/classify patterns of sensory and/or perceptual brain function in clinical populations.
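The jackknifed classification reported in the Results (each participant classified by a model fit to all the others) can be sketched as follows; the nearest-class-mean rule and the simulated ERP-profile features are assumptions for the sake of a runnable example, not the authors' actual classifier:

```python
import numpy as np

def jackknife_accuracy(features, labels):
    """Leave-one-out classification: each subject is held out in turn and
    classified by a nearest-class-mean rule fit on the remaining subjects."""
    features = np.asarray(features, dtype=float)
    labels = np.asarray(labels)
    correct = 0
    for i in range(len(labels)):
        train = np.arange(len(labels)) != i
        classes = np.unique(labels[train])
        means = [features[train & (labels == c)].mean(axis=0) for c in classes]
        dists = [np.linalg.norm(features[i] - m) for m in means]
        correct += classes[int(np.argmin(dists))] == labels[i]
    return correct / len(labels)

# Hypothetical ERP-profile features (e.g., component amplitudes/latencies)
# for 21 younger and 21 older adults, simulated with a group difference.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (21, 8)),
               rng.normal(0.8, 1.0, (21, 8))])
y = np.array([0] * 21 + [1] * 21)
acc = jackknife_accuracy(X, y)
print(acc)  # well above the 50% chance level for this effect size
```

Because every subject is scored by a model that never saw that subject, the jackknifed accuracy is an honest estimate of out-of-sample performance despite the small sample.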


What is a neurometric function? - Psychology

Associate Professor
Department of Psychology

Faculty Affiliate
Neuroscience
Applied Science

The psychophysiology of attention & cognitive control and assessment of cognitive deficits in clinical populations.

Dickter, C. & Kieffaber, P. D., (2013). EEG Methods in Psychological Science. Sage Publications, London.

Kieffaber, P. D., Hershaw, J., Sredl, J., West, R. (In Press) Electrophysiological Correlates of Error Initiation and Response Correction. Neuroimage.

Kieffaber, P. D., Cunningham, E., Hershaw, J., & Okhravi, H. R. (In Press) A brief neurometric battery for the assessment of age-related changes in cognitive function. Clinical Neurophysiology.

Brenner, C., Rumak, S. P., Burns, A., & Kieffaber, P. D. (2014) The Role of Encoding and Attention in Facial Emotion Memory: An EEG Investigation, International Journal of Psychophysiology, 93, 398-410.

* Oleynick, V. C., Thrash, T. M., LeFew, M. C., Moldovan, E. G., Kieffaber, P. D. (2014) The scientific study of inspiration in the creative process: Challenges and opportunities, Frontiers in Human Neuroscience, 8(436), 1-8.

West, R., Tiernan, B. N., Kieffaber, P. D., Bailey, K., & Anderson, S. (2014) The effects of age on the neural correlates of feedback processing in a naturalistic gambling game. Psychophysiology, 51(8), 734-745.

West, R., Bailey, K., Anderson, S., & Kieffaber, P. D. (2014) Beyond the FRN: A Spatio-temporal Analysis of the Neural Correlates of Feedback Processing in a Virtual Blackjack Game. Brain and Cognition, 86, 104-115.

Dickter, C. L., Kieffaber, P. D., Kittel, J. A., & Forestell, C. A. (2013) Mu Suppression as an Indicator of Activation of the Perceptual-Motor System by Smoking-related Cues in Smokers, Psychophysiology, 50(7), 664-670.

* Lindbergh, C. A., Kieffaber, P. D. (2013) The Neural Correlates of Temporal Judgments in the Duration Bisection Task. Neuropsychologia, 51(2), 191-196.

* Gayle, L. C., Gal, D., & Kieffaber, P. D. (2012) Measuring affective reactivity in individuals with autism spectrum personality traits using the visual mismatch negativity event-related brain potential. Frontiers in Human Neuroscience, 6(334), 1-7.

Kieffaber, P. D., Kruschke, J. K., Walker, P. M., Cho, R. Y., & Hetrick, W. P. (2012) The contributions of stimulus- and response-set to control and conflict in task-set switching. Journal of Experimental Psychology: Human Perception and Performance. Advance online publication. doi: 10.1037/a0029545

Kieffaber, P. D., Cho, R. Y. (2010) Induced cortical gamma-band oscillations reflect cognitive control elicited by implicit probability cues in the preparing to overcome prepotency (POP) task. Cognitive Affective & Behavioral Neuroscience, 10, 431-440.

Brenner, C. A., Kieffaber, P. D., Johannesen, J. K., O'Donnell, B. F., Hetrick, W. P. (2009) Event-related potential abnormalities in Schizophrenia: A failure to 'gate-in' salient information?, Schizophrenia Research, 113(2-3), 332-338.

* Paynter, C., Kieffaber, P. D., Reder, L. M. (2009) Knowing we know before we know: ERP correlates of initial feeling-of-knowing. Neuropsychologia, 47(3), 796-803.

Reder, L. M., Park, H., & Kieffaber, P. D. (2009) Memory systems do not divide on consciousness: Reinterpreting memory in terms of activation and binding. Psychological Bulletin, 135 (1), 23-49.

Carroll, C., Kieffaber, P. D., Vohs, J. L., O'Donnell, B. F., Shekhar, A., Hetrick, W. P. (2008) Contributions of Spectral Frequency Analyses to the Study of P50 ERP Amplitude and Suppression in Bipolar Disorder With or Without a History of Psychosis. Bipolar Disorders, 10(7), 776-787.

Vohs, J. L., Hetrick, W. P., Kieffaber, P. D., Bodkins, M., Bismark, A., Shekhar, A., & O'Donnell, B. F. (2008) Visual event-related potentials in schizotypal personality disorder and schizophrenia. Journal of Abnormal Psychology, 117(1), 119-131.

Kieffaber, P. D., Marcoulides, G. A., White, M., & Harrington, D. E. (2007) Modeling the ecological validity of neurocognitive assessment in adults with acquired brain injury. Journal of Clinical Psychology in Medical Settings, 14(3), 206-218.

Kieffaber, P. D., O'Donnell, B. F., Shekhar, A., & Hetrick, W. P. (2007) Event-related brain potential evidence for preserved attentional set switching in schizophrenia. Schizophrenia Research, 93, 355-365.

Kieffaber, P. D., Kappenman, E., O'Donnell, B. F., Shekhar, A., Bodkins, M., & Hetrick, W. P. (2006) Shifting and maintenance of task set in schizophrenia. Schizophrenia Research, 84(2-3), 345-358.

Kieffaber, P. D. & Hetrick, W. P. (2005). Event-related Potential Correlates of Task-switching and Switch Costs. Psychophysiology, 42, 56-71.

Johannesen, J. K., Kieffaber, P. D., O'Donnell, B. F., Shekhar, A., Evans, J. D., & Hetrick, W. P. (2005). Contributions of subtype and spectral frequency analysis to the study of P50 ERP amplitude and suppression in schizophrenia. Schizophrenia Research, 78, 269-284.

Brown, S. M., Kieffaber, P. D., Vohs, J. L., Carroll, C. A., Tracy, J. A., Shekhar, A., O'Donnell, B. F., Steinmetz, J. E., & Hetrick, W. P. (2005). Eye-blink conditioning deficits indicate timing and cerebellar abnormalities in schizophrenia. Brain and Cognition, 58, 94-108.

Zirnheld, P. J., Carroll, C. A., Kieffaber, P. D., O'Donnell, B. F., Shekhar, A., & Hetrick, W. P. (2004). Haloperidol Impairs Learning and Error-Related Negativity (ERN) in Humans. Journal of Cognitive Neuroscience, 16(6), 1098-1112.


Frequently Asked Questions

Currently, LINS neuropsychologists participate with Medicare, Workers' Compensation, and No Fault insurance. In addition, if a plan allows for out-of-network services, LINS may bill your insurance company for the cost of the evaluation. Prior to the onset of services, our office manager will determine any patient financial obligation. Many patients choose to pay for the evaluation and to submit on their own to the insurance company for reimbursement. Our billing staff have over ten years of experience providing patients with the necessary paperwork so that they will be reimbursed.

Insurance Plans Accepted:

Will my insurance company pay for the evaluation?

Neuropsychological evaluations are often covered when there is a history of a medical or neurological condition (e.g. brain injury, seizure, loss of consciousness). Further, many insurance companies will cover some or all of an evaluation that is performed to help physicians to better understand cognitive change (e.g. the reason for decline in memory, attention, or problem solving skills). Prior to onset of services, our office manager will contact your insurance company and provide you with information regarding your plan's coverage.

What is a neuropsychological examination?

The neuropsychological examination is one of the methods of diagnosing neurodevelopmental, neurodegenerative, and acquired disorders of brain function. It is frequently part of an overall neurodiagnostic assessment that includes other techniques such as CT, MRI, EEG, and SPECT. The purpose of the neuropsychological examination is to assess the clinical relationship between the brain/central nervous system and behavioral dysfunction. It is a neurodiagnostic, consultative service and NOT a mental health/psychological evaluation or psychiatric treatment service.

The Social Security Administration defines neuropsychological testing as the "administration of standardized tests that are reliable and valid with respect to assessing impairment in brain functioning." The examination is performed by a qualified neuropsychologist who has undergone specialized education and intensive training in clinical neuroanatomy, neurology, and neurophysiology. The neuropsychologist works closely with the primary or consultant physician in assessing the patient's cerebral status. Neuropsychological services are designated as "medicine, diagnostic" by the federal Health Care Financing Administration (HCFA), are subsumed under "Central Nervous System Assessments" in the 1996 CPT Code Book, and have corresponding ICD diagnoses.

Neuropsychological examinations are clinically indicated and medically necessary when patients display signs and symptoms of intellectual compromise or of cognitive and/or neurobehavioral dysfunction that involve, but are not restricted to, memory deficits, language disorders, impairment of organization and planning, difficulty with cognition, and perceptual abnormalities. Frequent etiologies include head trauma, stroke, tumor, infectious disease, toxic exposure, metabolic abnormalities, autoimmune disease, genetic defects, learning disabilities, and neurodegenerative disease. The examination entails the taking of an extensive history (including review of medical records) and the administration of a comprehensive battery of tests that can take many hours and requires intensive data analysis. Consultation with other medical professionals, such as neurologists, neurosurgeons, and radiologists, is common. The sensitivity of neuropsychological tests is such that they often reveal abnormality in the absence of positive findings on CT and MRI scans.

What is a neuropsychologist?

A clinical neuropsychologist is a professional within the field of psychology with special expertise in the applied science of brain-behavior relationships. Clinical neuropsychologists use this knowledge in the assessment, diagnosis, treatment, and/or rehabilitation of patients across the lifespan with neurological, medical, neurodevelopmental, and psychiatric conditions, as well as other cognitive and learning disorders. The clinical neuropsychologist uses psychological, neurological, cognitive, behavioral, and physiological principles, techniques, and tests to evaluate patients' neurocognitive, behavioral, and emotional strengths and weaknesses and their relationship to normal and abnormal central nervous system functioning. The clinical neuropsychologist uses this information, together with information provided by other medical/healthcare providers, to identify and diagnose neurobehavioral disorders and to plan and implement intervention strategies. The specialty of clinical neuropsychology is recognized by the American Psychological Association and the Canadian Psychological Association. Clinical neuropsychologists are independent practitioners (healthcare providers) of clinical neuropsychology and psychology. The clinical neuropsychologist (minimal criteria) has: 1. A doctoral degree in psychology from an accredited university training program. 2. An internship, or its equivalent, in a clinically relevant area of professional psychology. 3. The equivalent of two (full-time) years of experience and specialized training, at least one of which is at the post-doctoral level, in the study and practice of clinical neuropsychology and related neurosciences; these two years include supervision by a clinical neuropsychologist. 4. A license in his or her state or province to practice psychology and/or clinical neuropsychology independently, or employment as a neuropsychologist by an exempt agency.

*The above definition is provided by the National Academy of Neuropsychology. Additional information can be found on the NAN website.

Why have I been referred?

Neuropsychological evaluations are requested specifically to help your doctors, teachers, school psychologist, or other professionals understand how the different areas and systems of the brain are working. Testing is usually recommended when there are symptoms or complaints involving memory or thinking. This can be signaled by a change in concentration, organization, reasoning, memory, language, perception, coordination, or personality. The changes may be due to any of a number of medical, neurological, psychological, or genetic causes.


Analysis of neural activity

Spike density functions were obtained by convolving the spike train with a function resembling a postsynaptic potential, R(t) = [1 − exp(−t/τg)]·exp(−t/τd), where τg is the time constant for the growth phase and τd is the time constant for the decay phase. Physiological data from excitatory synapses indicate that 1 and 20 ms are optimal values for τg and τd, respectively (Sayer et al. 1990).
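A minimal sketch of this convolution, assuming 1-ms bins and a kernel normalized to unit area so that the output is a rate in spikes/s (the normalization choice is ours, not stated in the text):

```python
import numpy as np

def spike_density(spike_times_ms, duration_ms, tau_g=1.0, tau_d=20.0):
    """Convolve a 1-ms-binned spike train with the postsynaptic-potential
    kernel R(t) = [1 - exp(-t/tau_g)] * exp(-t/tau_d); the kernel is
    normalized to unit area so the output is a rate in spikes/s."""
    t = np.arange(0.0, 10 * tau_d)               # kernel support in ms
    kernel = (1.0 - np.exp(-t / tau_g)) * np.exp(-t / tau_d)
    kernel /= kernel.sum()
    train = np.zeros(int(duration_ms))
    train[np.asarray(spike_times_ms, dtype=int)] = 1.0
    return np.convolve(train, kernel)[:len(train)] * 1000.0

# Hypothetical spike train: three spikes in a 200-ms window.
sdf = spike_density([50, 60, 120], 200)
print(sdf.max())
```

Because the kernel rises quickly (τg = 1 ms) and decays slowly (τd = 20 ms), it behaves like a causal smoothing filter: each spike contributes only to the rate estimate after its occurrence, mimicking a postsynaptic potential.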

The average firing rate in cancelled stop-signal trials was compared with that in latency-matched no-stop-signal trials and noncancelled stop-signal trials as a function of time from the target presentation. To perform this time-course analysis, we subtracted the spike density function during cancelled stop-signal trials from the average spike density function during either latency-matched no-stop-signal trials or noncancelled stop-signal trials. The resulting spike density function is referred to as the differential spike density function. The time at which activity in the two conditions, when saccades were produced and when saccades were cancelled, began to diverge was defined as the instant when the differential spike density function exceeded 2 SDs of the difference in activity over the 200-ms interval before the target presentation, provided that this differential spike density function reached 6 SD and remained >2 SD for 50 ms.
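The divergence-time criterion above can be sketched as follows; the sigmoidal test signal and the baseline SD are hypothetical, and the text's 6-SD and 50-ms conditions are checked on each continuous run that exceeds 2 SD:

```python
import numpy as np

def divergence_time(sdf_cancelled, sdf_other, baseline_sd, dt_ms=1):
    """Onset of differential activity: first time the differential spike
    density function exceeds 2 SD of the pre-target baseline difference,
    provided the run reaches 6 SD and stays above 2 SD for >= 50 ms."""
    diff = np.abs(np.asarray(sdf_other) - np.asarray(sdf_cancelled))
    above = diff > 2.0 * baseline_sd
    start = 0
    while start < len(diff):
        if above[start]:
            end = start
            while end < len(diff) and above[end]:
                end += 1                          # extend run above 2 SD
            if (end - start) * dt_ms >= 50 and diff[start:end].max() >= 6.0 * baseline_sd:
                return start * dt_ms
            start = end
        else:
            start += 1
    return None

# Hypothetical smoothed rates: conditions diverge sigmoidally near 150 ms.
t = np.arange(400)
cancelled = np.full(400, 50.0)
other = 50.0 + 30.0 / (1.0 + np.exp(-(t - 170) / 8.0))
onset = divergence_time(cancelled, other, baseline_sd=1.0)
print(onset)
```

The two extra conditions (reaching 6 SD, staying above 2 SD for 50 ms) guard against declaring an onset from a brief noise excursion.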

The time of modulation of neurons was also determined using a receiver operating characteristic (ROC) analysis (Green and Swets 1966), as described elsewhere (Murthy et al. 2007, 2009). The spike density function from a set of at least five cancelled stop-signal trials was compared with the spike density function from a set of at least five noncancelled stop-signal trials or latency-matched no-stop-signal trials. Spike trains from the original sets of trials were bootstrapped to construct 500 simulated spike trains in each set for reliable comparison. A simulated spike train was constructed by randomly selecting one trial from the set of original trials at every 1-ms time bin; if a spike occurred in that trial at that instant, the spike was added to the simulated spike train. Comparisons were conducted by calculating ROC curves for successive 1-ms bins, starting at the time of target presentation and continuing until all saccades were initiated during one set of trials. The area under the ROC curve provides a quantitative measure of the separation between two distributions of activity: a value of 0.5 signifies that the two distributions overlap completely, whereas an extreme value of 0.0 or 1.0 signifies that they do not overlap at all. To describe the growth of the area under the ROC curve over time, the data were fit with a cumulative Weibull distribution function of the form W(t) = γ − (γ − δ)·exp[−(t/α)^β], where t is the time, ranging from when the area under the ROC curve attains its minimum to when it reaches its maximum; α is the time at which the area under the ROC curve reaches the sum of 63.2% of its maximum value γ and 36.8% of its minimum value δ; and β is the slope. The time of differential activity was determined from the growth of the ROC area over time and is defined as the time when the ROC area reached a value of 0.7.
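A sketch of the per-bin ROC comparison and of the cumulative Weibull used to describe its growth; the Poisson spike counts here are simulated stand-ins for the bootstrapped trial sets:

```python
import numpy as np

def roc_area(x, y):
    """Area under the ROC curve separating two samples: the probability
    that a random draw from x exceeds a random draw from y (ties = 0.5)."""
    x, y = np.asarray(x)[:, None], np.asarray(y)[None, :]
    return np.mean(x > y) + 0.5 * np.mean(x == y)

def weibull(t, alpha, beta, gamma, delta):
    """Cumulative Weibull describing the growth of the ROC area:
    W(t) = gamma - (gamma - delta) * exp(-(t / alpha)**beta)."""
    return gamma - (gamma - delta) * np.exp(-(t / alpha) ** beta)

# Hypothetical per-trial spike counts in one 1-ms analysis window for the
# two (bootstrapped) trial sets being compared.
rng = np.random.default_rng(2)
cancelled = rng.poisson(2.0, 500)
noncancelled = rng.poisson(5.0, 500)
auc = roc_area(noncancelled, cancelled)
print(auc)  # well above 0.5: the two distributions are separable
```

Repeating `roc_area` for every 1-ms bin yields the ROC-area time course; fitting `weibull` to that time course then gives α, the time at which the curve has covered 63.2% of the distance from its minimum δ to its maximum γ.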


Discussion

In the present study, we revisited optimal population coding using Bayesian ideal observer analysis in both the reconstruction and the discrimination paradigm. Both lead to very similar conclusions with regard to the optimal tuning width (Fig. 2 B and C) and the optimal noise correlation structure (Fig. 4 C and D). Importantly, the signal-to-noise ratio—which is critically limited by the available decoding time—plays a crucial role for the relative performance of different coding schemes: Population codes well suited for long intervals may be severely suboptimal for short ones. In contrast, Fisher information is largely ignorant of the limitations imposed by the available decoding time—codes that are favorable for long integration intervals seem favorable for short ones as well.

Whereas Fisher information yields an accurate approximation of the ideal observer performance in the limit of long decoding time windows, this is not necessarily true in the limit of large populations. We showed analytically that the ideal observer error for a population with Fisher-optimal tuning functions does not decay to zero in the limit of a large number of neurons but saturates at a value determined solely by the available decoding time (Fig. 3C). In contrast, Fisher information predicts that the error scales like the inverse of the population size, independent of time (Fig. 3B). Thus, the “folk theorem” that Fisher information provides an accurate assessment of coding quality in the limit of large population size is correct only if the width of the tuning functions is not optimized as the population grows.

In the discrimination task, we explained this behavior by showing that the coarse discrimination error is independent of the population size for ensembles with Fisher-optimal tuning curves. In the reconstruction task, large estimation errors play a similar role to the coarse discrimination error. The convergence of the reconstruction error to a normal distribution with variance equal to the inverse Fisher information relies on a linear approximation of the derivative of the log-likelihood (19). If the tuning function width scales with population size—as it does if the tuning functions are optimized for Fisher information—the quality of this linear approximation does not improve with increasing population size because the curvature of the tuning functions is directly coupled to the tuning width. As a consequence, the Cramér–Rao bound from Eq. 1 is not tight even asymptotically, leading to the observed discrepancies between Fisher information and the MMSE.
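The looseness of the Cramér–Rao bound for short decoding windows can be made concrete with a small simulation. This sketch is not from the paper; the population size, tuning sharpness and 10 ms window are invented parameters. An independent-Poisson population with von Mises tuning is decoded by grid maximum likelihood, and the empirical squared error is compared with the inverse Fisher information:

```python
import numpy as np

rng = np.random.default_rng(4)
N, r_max, kappa, dt, n_trials = 50, 50.0, 8.0, 0.01, 400
phi = np.linspace(0, 2 * np.pi, N, endpoint=False)   # preferred stimuli
theta = 0.0                                          # true stimulus

def rates(th):
    """Von Mises tuning curves f_i(theta) in spikes/s (toy parameters)."""
    return r_max * np.exp(kappa * (np.cos(th - phi) - 1.0))

# Fisher information of the independent-Poisson population for window dt:
# J = dt * sum_i f_i'(theta)^2 / f_i(theta)
fr = rates(theta)
dfr = r_max * kappa * np.sin(phi - theta) * np.exp(kappa * (np.cos(theta - phi) - 1.0))
fisher = dt * np.sum(dfr ** 2 / fr)

# Grid maximum-likelihood decoding of Poisson spike counts over the window
grid = np.linspace(-np.pi, np.pi, 361)
grid_rates = rates(grid[:, None])                    # grid x N expected rates
counts = rng.poisson(fr * dt, size=(n_trials, N))
# log-likelihood: sum_i n_i*log(f_i(th)*dt) - f_i(th)*dt (dropping the n! term)
ll = counts @ np.log(grid_rates * dt).T - (grid_rates * dt).sum(axis=1)
est = grid[np.argmax(ll, axis=1)]
err = (est - theta + np.pi) % (2 * np.pi) - np.pi    # wrapped estimation error
mse = float(np.mean(err ** 2))
```

With these toy parameters only a few spikes are expected per window, so trials with few or no spikes produce large errors and the empirical mean squared error ends up well above 1/fisher, in line with the argument above that the bound is not tight in this regime.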

Similarly, Fisher information also fails to evaluate the ideal observer performance for different noise correlation structures correctly when the time available for decoding is short. The reason is that the link between Fisher information and the optimal reconstruction or discrimination error also relies on the central limit theorem (4, 18, 19). Therefore, in the presence of noise correlations, the approximation of the ideal observer error obtained from Fisher information can converge very slowly or not at all to the true error for increasing population size, because the observations gathered from different neurons are no longer independent. In fact, our results show that it is crucial not to rely on the asymptotic approach of Fisher information alone to determine the relative quality of different correlation structures.

In contrast to our study, earlier studies using the discrimination framework mostly measured the minimal linear discrimination error or computed the fine discrimination error only (4, 13, 27–29). Two other studies used upper bounds on the MDE, which are tighter than the minimal linear discrimination error (13, 30), but no study has so far computed the exact MDE for the full range of the neurometric function. For a detailed discussion of these studies see SI Discussion. Information theoretic approaches provide a third framework for evaluating neural population codes in addition to the reconstruction and discrimination framework studied here. For example, stimulus-specific information (SSI) has been used to assess the role of the noise level for population coding in small populations (31) and in the asymptotic regime, SSI and Fisher information seem to yield qualitatively similar results (32). In contrast to neurometric function analysis, information theoretic approaches are not directly linked to a behavioral task.

In conclusion, neurometric function analysis offers a tractable and intuitive framework for the analysis of neural population coding with an exact ideal observer model. It is particularly well suited for comparing theoretical assessments of different population codes with results from psychophysical or neurophysiological measurements, as the two-alternative forced choice orientation discrimination task is widely used in neurophysiological and psychophysical investigations in humans and monkeys (33, 34). In contrast to Fisher information, neurometric functions are informative not only about fine but also about coarse discrimination performance. For example, two codes with the same Fisher information may even yield different neurometric functions (Fig. S6). Our results suggest that the validity of conclusions based on Fisher information depends on the coding scenario being investigated: If the parameter of interest induces changes that either impair or improve both fine and coarse discrimination performance (e.g., when studying the effect of population size for fixed, wide tuning functions), Fisher information is a valuable tool for assessing different coding schemes. If, however, fine discrimination performance can be improved at the cost of coarse discrimination performance (as is the case with tuning width), optimization of Fisher information will impair the average performance of the population codes. In this case, quite different population codes are optimal than those inferred from Fisher information.


What is all the noise about in interval timing?

Cognitive processes such as decision-making, rate calculation and planning require accurate estimation of durations in the supra-second range, known as interval timing. In addition to being accurate, interval timing is scale invariant: time-estimation errors are proportional to the estimated duration. The origin and mechanisms of this fundamental property are unknown. We discuss the computational properties of a circuit consisting of a large number of (input) neural oscillators projecting onto a small number of (output) coincidence detector neurons, which allows time to be coded by the pattern of coincidental activation of its inputs. We show analytically and verify numerically that time-scale invariance emerges from the neural noise. In particular, we found that errors or noise during storing or retrieving information about the memorized criterion time produce a symmetric, Gaussian-like output whose width increases linearly with the criterion time. In contrast, frequency variability produces an asymmetric, long-tailed Gaussian-like output that also obeys the scalar property. In this architecture, time-scale invariance depends neither on the details of the input population nor on the probability distribution of the noise.

1. Introduction

The perception and use of durations in the seconds-to-hours range (interval timing) is essential for survival and adaptation, and is critical for fundamental cognitive processes such as decision-making, rate calculation and planning of action [1]. The classic interval timing paradigm is the fixed-interval (FI) procedure, in which a subject's behaviour is reinforced for the first response (e.g. lever press) made after a pre-programmed interval has elapsed since the previous reinforcement. Subjects trained on the FI procedure typically start responding after a fixed proportion of the interval has elapsed, despite the absence of any external time cues. A widely used discrete-trial variant of the FI procedure is the peak-interval (PI) procedure [2,3]. In the PI procedure, a stimulus such as a tone or light is turned on to signal the beginning of the to-be-timed interval, and in a proportion of trials the subject's first response after the criterion time is reinforced. In the remainder of the trials, known as probe trials, no reinforcement is given and the stimulus remains on for about three times the criterion time. The mean response rate over a very large number of trials has a Gaussian shape whose peak measures the accuracy of criterion time estimation and whose spread measures its precision. In the vast majority of species, protocols and manipulations to date, interval timing is both accurate and time-scale invariant, i.e. time-estimation errors increase linearly with the estimated duration [4–7] (figure 1). Accurate and time-scale invariant interval timing has been observed in many species [1,4], from invertebrates to fish, birds and mammals such as rats [8] (figure 1a), mice [11] and humans [9] (figure 1b). Time-scale invariance is stable over behavioural (figure 1b), lesion [12], pharmacological [13,14] (figure 1c) and neurophysiological manipulations [10] (figure 1d).
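The defining signature, namely that normalized response curves for different criteria superimpose, can be illustrated with a toy computation. The Gaussian response curves and the relative width k = 0.15 are invented for the illustration; this demonstrates the property, not the data:

```python
import numpy as np

k = 0.15                                  # assumed relative width (Weber fraction)
t = np.linspace(0.0, 135.0, 13501)        # absolute time axis (s)

def response_curve(T):
    """Idealized mean response rate: Gaussian peaked at criterion T with
    spread proportional to T (sigma = k*T), i.e. scale-invariant errors."""
    return np.exp(-0.5 * ((t - T) / (k * T)) ** 2)

r30, r90 = response_curve(30.0), response_curve(90.0)

# Rescale the horizontal axis by each criterion (t/T): the curves overlap.
s = np.linspace(0.5, 1.5, 201)
r30_scaled = np.interp(s * 30.0, t, r30)
r90_scaled = np.interp(s * 90.0, t, r90)
max_gap = float(np.abs(r30_scaled - r90_scaled).max())
```

After normalization the two curves coincide (max_gap sits at interpolation-error level), which is exactly the overlap shown in figure 1a.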

Figure 1. Accurate and time-scale invariant interval timing. (a) The response rates of rats timing a 30 s (left) or a 90 s (right) interval overlap (centre) when the vertical axis is normalized by the maximum response rate and the horizontal axis by the corresponding criterion time (redrawn from [8]). (b) Time-scale invariance in human subjects for 8 and 21 s criteria (redrawn from [9]). (c) Systemic cocaine (COC) administration speeds up timing proportionally (scalar) to the original criteria, 30 and 90 s (redrawn from [8]). (d) The hemodynamic response associated with a subject's active time reproduction scales with the timed criterion, 11 versus 17 s (redrawn from [10]). An important feature of the output function is its asymmetry, which is clearly visible in (c): although all output functions have a Gaussian-like shape, they also present a long tail. (Online version in colour.)

One of the most influential interval timing paradigms assumes a pacemaker–accumulator clock (pacemaker-counter) and was introduced by Treisman [15]. According to Treisman [15], the interval timing mechanism that links internal clock to external behaviour also requires some kind of store of reference times and some comparison mechanism for time judgement. The model was rediscovered two decades later and became the scalar expectancy theory (SET) [5,16]. SET also assumes that interval timing emerges from the interaction of three abstract blocks: clock, accumulator (working or short-term memory) and comparator. The clock stage is a Poisson process whose pulses are accumulated in the working memory until the occurrence of an important event, such as reinforcement. At the time of the reinforcement, the number of clock pulses accumulated is transferred from the working (short-term) memory and stored in a reference (or long-term) memory. According to the SET, a response is produced by computing the ratio between the value stored in the reference memory and the current accumulator total. To account for the scalar property of interval timing, i.e. the variability of responses is roughly proportional to the peak time, Gibbon [17] showed that a Poisson distribution for the accumulator requires a time-dependent variance in the 'decision and memory factors as well as in the internal clock. These additional sources will be seen to dominate overall variance in performance’ (p. 191), emphasizing the important role of cognitive systems in time judgements. For such reasons, SET was considered more a general theory of animal cognition than strictly a theory of animal timing behaviour [18].
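The clock-accumulator-comparator loop of SET can be caricatured in a few lines. All numbers here (pacemaker rate, memory noise, response threshold) are invented; the point is only that multiplicative memory noise makes the coefficient of variation of response-start times roughly independent of the criterion, i.e. scalar:

```python
import numpy as np

rng = np.random.default_rng(2)

def set_trial(T, rate=20.0, mem_cv=0.15, thresh=0.9, dt=0.01):
    """One probe trial of a minimal SET sketch (all parameters invented):
    a Poisson clock emits pulses into an accumulator; the reference memory
    holds a noisy copy of the pulse count stored at reinforcement time T;
    responding starts once accumulator/reference exceeds `thresh`."""
    reference = rate * T * rng.normal(1.0, mem_cv)    # noisy reference memory
    pulses = rng.poisson(rate * dt, size=int(3 * T / dt))
    acc = np.cumsum(pulses)                           # working-memory total
    if acc[-1] < thresh * reference:
        return np.nan                                 # no response this trial
    return (np.argmax(acc >= thresh * reference) + 1) * dt

starts_10 = np.array([set_trial(10.0) for _ in range(400)])
starts_30 = np.array([set_trial(30.0) for _ in range(400)])

# Scalar property: the coefficient of variation of response-start times is
# roughly the same for both criteria because the memory noise is multiplicative.
cv10 = np.nanstd(starts_10) / np.nanmean(starts_10)
cv30 = np.nanstd(starts_30) / np.nanmean(starts_30)
```

This matches Gibbon's point quoted above: the Poisson clock alone would give a coefficient of variation shrinking like one over the square root of the count, so memory and decision variance must dominate for the scalar property to hold.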

Another influential interval timing model is the behavioural timing (BeT) theory [19,20]. BeT assumes a ‘clock’ consisting of a fixed sequence of states with the transition from one state to the next driven by a Poisson pacemaker. Each state is associated with different classes of behaviour, and the theory claims these behaviours serve as discriminative stimuli that set the occasion for appropriate operant responses (although there is not a one-to-one correspondence between a state and a class of behaviours). The added assumption that pacemaker rate varies directly with reinforcement rate allows the model to handle some experimental results not covered by SET, although it has failed some specific tests (see [21] for a review).

A handful of neurobiologically inspired models explain accurate timing and time-scale invariance as a property of the information flow in the neural circuits [22,23]. Buonomano & Merzenich [24] implemented a neural network model with randomly connected circuits representing cortical layers 4 and 3 in order to mimic the temporal-discrimination task in the tens to hundreds of milliseconds range. Durstewitz hypothesized that the climbing rate of activity observed experimentally, e.g. from thalamic neurons recordings [25], may be involved in timing tasks [26]. Durstewitz [26] used a single-cell computational model with a calcium-mediated feedback loop that self-organizes into a biophysical configuration which generates climbing activity. Leon & Shadlen [27] suggested that the scalar timing of subsecond intervals may also be addressed at the level of single neurons, though how such a mechanism accounts for timing of supra-second durations is unclear. A solution to this problem was offered by Killeen & Taylor [28] who explained timing in terms of information transfer between noisy counters, although the biological mechanisms were not addressed.

Population clock models of timing are based on the repeatable patterns of activity of a large neural network that allow identification of elapsed time based on a ‘snapshot’ of neural activity [29,30]. In all population clock models, timing is an emergent property of the network in the sense that it relies on the interaction between neurons to produce accurate timing over a time-scale that far exceeds the longest firing period of any individual neuron. The first population clock model was proposed by Mauk and co-workers [7,31,32] in the context of the cerebellum. Such models consist of possibly multiple layers of recurrently connected neural networks, i.e. networks of all-to-all coupled neurons that make it possible for a neuron to indirectly feed back onto itself [30]. Depending on the coupling strengths, the recurrent neural networks can self-maintain reproducible dynamic patterns of activity in response to a certain input. Such autonomous and reproducible patterns of neural activity could offer a reliable model for timing. Another advantage of the population clock models is that for weak couplings the network cannot self-maintain reproducible patterns of activity but instead produces input-dependent patterns of activity. Such a model was recently proposed for sensory timing [30]. Similar firing rate models were used by Itskov et al. [33] to design a large recurrently connected neural network that produced precise interval timing. By balancing the contribution of the deterministic and stochastic coupling strengths, they showed that the first layer of such a population clock model can produce either a reproducible pattern of activity (associated with a timing ‘task’) or a desynchronized pattern of activity that cannot keep track of long time-intervals (‘home cage’) [33]. The rate model of Itskov et al. [33] was also capable of extracting accurate interval timing information from a second layer with no recurrent excitation and only a global, non-specific recurrent inhibition. The second layer was driven by both the output of the previous layer (through sparse and random connections) and noise [33].

Finally, a quite different solution was offered by Meck and co-workers [4,34] (figure 2a), who proposed the striatal beat frequency (SBF) model, in which timing is coded by the coincidental activation of neurons, which produces firing beats with periods spanning a much wider range of durations than single neurons [35]. As Matell & Meck [34] suggested, interval timing could be the product of multiple and complementary mechanisms; the same neuroanatomical structure could use different mechanisms for interval timing.

Figure 2. The neurobiological structures involved in interval timing and the corresponding simplified SBF architecture. (a) Schematic of some neurobiological structures involved in interval timing. The colour-coded connectivities among different areas emphasize the appropriate neuromodulatory pathways. The two main areas involved in interval timing are the frontal cortex and the basal ganglia. (b) In our implementation of the SBF model, the states of the Nin cortical oscillators (input neurons) at reinforcement time T are stored in the reference memory as a set of weights wi. During test trials, the working memory stores the state of the FC oscillators vi(t) and, together with the reference memory, projects its content onto the Nout spiny (output) neurons of the BG. FC, frontal cortex; MC, motor cortex; BG, basal ganglia; TH, thalamus; GPE, globus pallidus external; GPI, globus pallidus internal; STn, subthalamic nucleus; SNc/r, substantia nigra pars compacta/reticulata; VTA, ventral tegmental area; Glu, glutamate; DA, dopamine; GABA, gamma-aminobutyric acid; ACh, acetylcholine. (Online version in colour.)

Here, we showed analytically that, in the context of the proposed SBF neural circuitry, time-scale invariance emerges naturally from variability (noise) in the model's parameters. We also showed that time-scale invariance is independent of both the type of the input neuron and the probability distribution or the sources of the noise. We found that criterion time noise produces a symmetric Gaussian output that obeys the scalar property. On the other hand, frequency noise produces an asymmetric Gaussian-like output with a long tail that also obeys the scalar property.

2. The striatal beat frequency model

(a) Neurobiological justification of a striatal beat frequency model

Our paradigm for interval timing is inspired by the SBF model [4,34], which assumes that durations are coded by the coincidental activation of a large number of cortical (input) neurons projecting onto spiny (output) neurons in the striatum that selectively respond to particular reinforced patterns [36–38] (figure 2a).

(i) Neural oscillators

A key assumption of the SBF model is the existence of a set of neural oscillators able to provide the time base for the interval timing network. There is strong experimental evidence that oscillatory activity is a hallmark of neuronal activity in various brain regions, including the olfactory bulb [39–41], thalamus [42,43], hippocampus [44,45] and neocortex [46]. Cortical oscillators in the alpha band (8–12 Hz [47,48]) were previously considered as pacemakers for temporal accumulation [49], as they reset upon occurrence of the to-be-remembered stimuli [50]. In the SBF model, the neural oscillators are loosely associated with the frontal cortex (FC figure 2a).

(ii) Working and long-term memories

Among the potential areas involved in storing the brain's states related to salient features of stimuli in interval timing trials are the hippocampus (see [51] and references therein) and the striatum, which we mimic in our simplified neural circuitry (figure 2a).

(iii) Coincidence detection with spiny neurons

Support for the involvement of the striato-frontal dopaminergic system in timing comes from imaging studies in humans [52–55], lesion studies in humans and rodents [56,57], and drug studies in rodents [58,59], all pointing towards the basal ganglia (BG) as having a central role in interval timing (see also [60] and references therein). Striatal firing patterns are peak-shaped around a trained criterion time, a pattern consistent with substantial striatal involvement in interval timing processes [61]. Lesions of the striatum result in deficiencies in both temporal-production and temporal-discrimination procedures [62]. There is also neurophysiological evidence that the striatum can engage reinforcement learning to perform pattern comparisons (reviewed by Sutton & Barto [63]). Another reason we ascribed coincidence detection to medium spiny neurons is their bistable property, which permits selective filtering of incoming information [64,65]. Each striatal spiny neuron integrates a very large number of afferents (between 10 000 and 30 000) [36,37,65], of which the vast majority (≈72%) are cortical [47,66].

(iv) Biological noise and network activity

The activity of any biological neural network is inevitably affected by different sources of noise, e.g. channel gating fluctuations [67,68], noisy synaptic transmission [69] and background network activity [70–72]. Single-cell recordings support the hypothesis that irregular firing in cortical interneurons is determined by the intrinsic stochastic properties (channel noise) of individual neurons [73,74]. At the same time, fluctuations in the presynaptic currents that drive cortical spiking neurons contribute significantly to the large variability of interspike intervals [75,76]. For example, in spinal neurons, synaptic noise alone fully accounts for output variability [75]. Additional variability affects either the storage (writing) or retrieval (reading) of the criterion time to or from memory [77,78]. Another source of criterion time variability comes from considerations of how animals are trained [79,80]. In this paper, we were concerned neither with the biophysical mechanisms that generate irregular firing of cortical oscillators nor with how reading/writing errors of the criterion time arise. Rather, we investigated whether the assumed variabilities in the SBF model's parameters can produce accurate and time-scale invariant interval timing.

(b) Numerical implementation of a striatal beat frequency model

(i) Neural oscillators

Neurons that produce stable membrane potential oscillations are mathematically described as limit cycle oscillators, i.e. they possess a closed and stable phase-space trajectory [81]. Because the oscillations repeat identically, it is often convenient to map the high-dimensional space of periodic oscillators onto a phase variable that continuously covers the interval (0, 2π). Phase oscillator models have a series of advantages: (i) they provide analytical insights into the response of complex networks; (ii) any neural oscillator can be reduced to a phase oscillator near a bifurcation point [82]; and (iii) they allow numerical checks in a reasonable time. All neurons operate near a bifurcation, i.e. a point past which the neuron produces large membrane potential excursions, called action potentials [81].

In this SBF-sin implementation, the cortical neurons, presumably localized in the FC (figure 2a), are represented by Nin (input) phase oscillators with intrinsic frequencies fi (i = 1, …, Nin) uniformly distributed over the interval (fmin, fmax), projecting onto Nout (output) spiny neurons [34] (figure 2b). A sine wave is the simplest possible phase oscillator that mimics the periodic transitions between hyperpolarized and depolarized states observed in single-cell recordings. For analytical purposes, the membrane potential of the ith cortical neuron was approximated by a sine wave vi(t) = a·cos(2πfit), where a is the amplitude of the oscillations. We also implemented an SBF-ML network in which the input neurons are conductance-based Morris–Lecar (ML) model neurons with two state variables: membrane potential and a slowly varying potassium conductance [83,84] (see electronic supplementary material, section A for detailed model equations).
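A minimal sketch of this input layer can be written directly from the definitions above. The 8–12 Hz band follows the alpha-band assumption discussed earlier; the amplitude and oscillator count are arbitrary illustrative values:

```python
import numpy as np

Nin, a = 1000, 1.0                           # assumed layer size and amplitude
fmin, fmax = 8.0, 12.0                       # assumed alpha-band limits (Hz)
f = np.linspace(fmin, fmax, Nin)             # f_i uniform over (fmin, fmax)

def v(t):
    """Membrane potentials v_i(t) = a*cos(2*pi*f_i*t) of all Nin input
    oscillators at time(s) t in seconds; returns an Nin x len(t) array."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    return a * np.cos(2 * np.pi * f[:, None] * t[None, :])

states = v([0.0, 0.025, 10.0])               # oscillator states at three times
```

At t = 0 all oscillators are aligned at their depolarized extreme; as time elapses the fixed frequency spread de-phases them, which is what makes the beat pattern in the coincidence detector informative about elapsed time.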

(ii) Working and long-term memories

The memory of the criterion time T is numerically modelled by the set of state parameters (or weights) wij that characterize the state of cortical oscillator i during the FI trial j. In our implementation of the noiseless SBF-sin model, the weights are wij = cos(2πfiTj), where Tj is the stored value of the criterion time T in the FI trial j. The reference weight wi of FC oscillator i was implemented as the normalized average over all memorized values Tj of the criterion time, with the normalization constant chosen such that the normalized weight is bounded, |wi| ≤ 1 (figure 2b). We found no difference between the response of the SBF model with the above weights and with a positively defined weight.

(iii) Coincidence detection with spiny neurons

The comparison between a stored representation of an event, e.g. the set of states wi of the cortical oscillators at the reinforcement (criterion) time, and the current state vi(t) of the same cortical oscillators during the ongoing test trial is believed to be distributed over many areas of the brain [85]. Based on neurobiological data, in our implementation of the striato-cortical interval timing network, we use a ratio of 1000 : 1 between the input (cortical) oscillators and the output (spiny) neurons in the BG (figure 2b). The output neurons, which mimic the spiny neurons in the BG, act as coincidence detectors: they fire when the current pattern of activity vi(t) of the cortical oscillators matches the memorized reference weights wi. Numerically, the coincidence detection was modelled using the product of the two sets of weights:

The purpose of the coincidence detection given by equation (2.1) is to implement a rule that produces a strong output when the two vectors wi and vi(t) coincide and a weaker response when they are dissimilar. Although there are many choices, such as sigmoidal functions (which involve numerically expensive calculations owing to the exponential functions involved), we opted for the simplest possible rule that fulfils the above requirement, i.e. the dot product of the vectors wi and vi(t). Without reducing the generality of our approach, and in agreement with experimental findings [66], for the analytical analyses we considered only one output neuron (Nout = 1) in equation (2.1).
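Under the same sine-wave assumptions, the dot-product coincidence rule for a single output neuron (Nout = 1) can be sketched as follows. The weight definition and normalization are illustrative choices, but the qualitative behaviour, a noise-free output that peaks at t = T with a width that does not depend on T, is the one derived in §3a below:

```python
import numpy as np

Nin = 1000
f = np.linspace(8.0, 12.0, Nin)              # assumed input frequencies (Hz)
t = np.linspace(0.0, 60.0, 6001)             # probe-trial time axis (s)

def output(T):
    """Noise-free coincidence detector (Nout = 1): dot product of memorized
    weights w_i = cos(2*pi*f_i*T) with current states v_i(t) = cos(2*pi*f_i*t),
    normalized by the number of inputs (illustrative normalization)."""
    w = np.cos(2 * np.pi * f * T)
    vt = np.cos(2 * np.pi * f[:, None] * t[None, :])
    return w @ vt / Nin

out10, out30 = output(10.0), output(30.0)

# The peak tracks the criterion, but the peak width does not scale with it:
width10 = int((np.abs(out10) > 0.25).sum())  # bins where the output is large
width30 = int((np.abs(out30) > 0.25).sum())
```

The dot product is maximal when the probed states line up with the memorized ones, i.e. at t = T, and the width of that peak is set by the frequency band alone, which is exactly why a noiseless SBF model cannot be scale invariant.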

(iv) Biological noise and network activity

Two sources of variability (noise) were considered in this SBF implementation. (i) Frequency variability was modelled by allowing the intrinsic frequencies fi to fluctuate according to a specified probability density function pdff, e.g. Gaussian, Poisson, etc. Computationally, the noise in the firing frequency of the respective neurons was introduced by varying either the frequency fi (in the SBF-sin implementation) or the bias current Ibias required to bring the ML neuron to the excitability threshold (in the SBF-ML implementation). (ii) Memory variability was modelled by allowing the criterion time T to be randomly distributed according to a probability density function pdfT.

3. Results

(a) No time-scale invariance in a noiseless striatal beat frequency model

In the absence of noise (variability) in the SBF-sin model, the output given by equation (2.1) for Nout = 1 is (see electronic supplementary material, section B for detailed calculations):

The width, σout, of the output function is determined from the condition that the output amplitude at t = T + σout/2, i.e. out(t = T + σout/2), is half of its maximum amplitude out(t = T). Based on equation (3.1), we predicted theoretically that in the absence of noise σout is independent of the criterion time, which violates time-scale invariance (see electronic supplementary material, section B for detailed calculations).

To numerically verify the above predictions, the envelope of the output function of a noise-free SBF-sin model was fitted with a Gaussian whose mean and standard deviation were contrasted against the theoretically predicted values (figure 3a). The width of the envelope is constant regardless of the criterion time and matches the theoretical prediction.

Figure 3. A noise-free SBF model does not produce time-scale invariance. Numerically generated output of a noise-free SBF-sin (a) and SBF-ML (b) model with Nin = 1000 for T = 10 and T = 30 s. As predicted, the width of the output function of any noise-free SBF model is independent of the criterion time. The Gaussian envelopes are also shown with a continuous line (for T = 10 s) and a dashed line (for T = 30 s). (Online version in colour.)

The above result, namely that a noise-free SBF-sin model violates the time-scale property, extends to any type of input neuron. Indeed, according to Fourier theory, any periodic train of action potentials can be decomposed into discrete sine-wave components. It follows that, irrespective of the type of input neuron, a noise-free SBF model cannot produce time-scale invariant outputs. We verified this prediction by replacing the sine-wave oscillator inputs with biophysically realistic noise-free ML neurons (figure 3b). Numerical simulations confirmed that the envelope of the output function of the SBF-ML model can be reasonably fitted by a Gaussian (see [48,86,87]), but the width of the Gaussian output does not increase with the timed interval (figure 3b), thus violating time-scale invariance (the scalar property).

(b) Time-scale invariance emerges from criterion time noise in the striatal beat frequency model

Many sources of noise (variability) may affect the functioning of an interval timing network, such as small fluctuations in the intrinsic frequencies of the inputs and in the encoding and retrieval of the weights wi(T) by the output neuron(s) [34,35,86–88]. Here, we showed analytically that one noise source is sufficient to produce time-scale invariance [34,48]. Without compromising generality, in the following we examined the role of variability in the encoding and retrieval of the criterion time by the output neuron(s). The cumulative effect of all noise sources (trial-to-trial variability, neuromodulatory inputs, etc.) on the memorized weights wi was modelled by the stochastic variable Tj distributed around T according to a given pdfT. For Nout = 1, the output function given by equation (2.1) becomes (see electronic supplementary material, section C for detailed calculations):

(c) Particular case: infinite frequency range and time-scale invariance in the presence of Gaussian noise affecting the memorized criterion time

Although we already showed that the output function of the SBF-sin model with an arbitrary pdfT for the criterion time noise is always Gaussian, produces accurate interval timing and obeys the scalar property, it is illuminating to grasp the meaning of the theoretical coefficients in our general result by investigating a biologically relevant particular case. If the criterion time is affected by Gaussian noise with zero mean and standard deviation σT, then one can show (see electronic supplementary material, section D for detailed calculations) that, in the limit of a very large pool (theoretically infinite) of inputs, the output function of the SBF-sin model is

The output function given by equation (3.3), with the physically realizable term centred at t = T: (i) has a Gaussian shape (as predicted by the central limit theorem), (ii) peaks at t = T, i.e. produces accurate timing, and (iii) has a standard deviation

(d) Particular case: finite frequency range and time-scale invariance in the presence of Gaussian noise affecting the memorized criterion time

In our previous numerical implementations of the SBF model [48,86,87], the frequency range was finite and coincided with the alpha band (8–12 Hz). Does the SBF model still perform accurate and scalar interval timing under such a strong restriction? For a finite range of frequencies (fmin < f < fmax) with a very large number of FC oscillators Nin, a more realistic estimation of the output function from equation (3.2) is (see electronic supplementary material, section E for detailed calculations):

We used the SBF-sin implementation to numerically verify our theoretical prediction for σout over multiple trials (runs) of this type of stochastic process and for different values of T. The output functions (see continuous lines in figure 4a) for T = 10 s and T = 30 s are reasonably fitted by Gaussian curves. Our numerical results show a linear relationship between the σout of the Gaussian fit of the output and T, and the slope of this linear relationship matched the theoretical prediction. For example, for σT = 10% the average slope was 11.3% ± 4.5%, with a coefficient of determination of R² = 0.93, p < 10⁻⁴. We also found that for the SBF-ML implementation the width of the Gaussian envelope increases linearly with the criterion time (figure 4b). Figure 4c shows the slope of the standard deviation σout versus the criterion time for different values of the standard deviation of the Gaussian noise. It shows not only that the scalar property holds, but also that, as we predicted theoretically, σout is proportional to the standard deviation of the noise σT. Indeed, for σT = 0.05 the numerically estimated proportionality constant is 0.068 (filled squares in figure 4c, R² = 0.97); for σT = 0.1 the slope is 0.129 (filled circles in figure 4c, R² = 0.96); and for σT = 0.2 the slope is 0.25 (filled triangles in figure 4c, R² = 0.96).

Figure 4. Time-scale invariance emerges from criterion time noise in the SBF model. (a) Time-scale invariance emerges spontaneously in a noisy SBF-sin model; here, the two criteria are T = 10 and T = 30 s. The output functions (thin continuous lines) were fitted with Gaussian curves (thick continuous line for T = 10 s and dashed line for T = 30 s) in order to estimate the position of the peak and the width of the output function. In an SBF-sin model, the standard deviation of the output function increases linearly with the criterion time. (b) In an SBF-ML implementation, the output function still has a Gaussian shape (owing to the central limit theorem) and its width increases with criterion time. (c) Numerical simulations confirm that the standard deviation of the output function σout increases linearly with the criterion time T, which is the hallmark of time-scale invariance. Furthermore, for Gaussian memory variance we also found that σout is proportional to the standard deviation of the noise σT. (Online version in colour.)

(e) Time-scale invariance emerges from frequency variance during probe trials in the striatal beat frequency model

In addition to memory variance, frequency fluctuations owing to stochastic channel noise or background network activity have received considerable attention. Here, we considered only frequency variability during the probe trial and assumed that there was no frequency variability during the FI procedure while the weights wi were memorized. We also assumed that there is no variability in the memorized criterion time, because its effect on interval timing was already addressed in §3d.

The cumulative effect of all noise sources on the firing frequencies during the probe trials was modelled by the stochastic variable fij distributed around the frequency fi according to a given pdff. Based on equation (2.1) with Nout = 1, the output function term centred around t = T becomes (see electronic supplementary material, section F for detailed calculations):

Based on the central limit theorem, the output function given by equation (3.7), which is the sum of a (very) large number Ntrials of stochastic variables fij, is always a Gaussian, regardless of the pdff. We used the average value of the stochastic equation (3.7) to estimate the output function and found that (see electronic supplementary material, section F for detailed calculations) it is always: (i) Gaussian (based on the central limit theorem), (ii) peaks at t0f = T/(1 + γf) ≈ T and (iii) has a standard deviation σout that increases linearly with the criterion time T

(f) Particular case: infinite frequency range and time-scale invariance in the presence of Gaussian noise affecting oscillators’ frequencies during probe trials

As in §2e, we used a Gaussian distribution pdff to explicitly compute the theoretical coefficients in the above general result. Briefly, by replacing the stochastic frequencies fij with an appropriate Gaussian distribution fi(1 + Gauss(0, σf)j), we found that the output function is (see electronic supplementary material, section G for detailed calculations):

which looks like a Gaussian with a very long tail (figure 5a) and peaks at t0f. The skewness of the output function increases with the standard deviation of the frequency noise σf. For t < t0f, the half-width Δτ1 increases with the standard deviation of the frequency noise σf, although at a much slower rate than Δτ2 for t > t0f (figure 5b). This fact is reflected in a faster than linear increase of the ratio Δτ2/Δτ1 with σf (figure 5c). The quadratic fit over the entire range σf ∈ [0, 1] shown in figure 5c is given by Δτ2/Δτ1 = (0.902 ± 0.007) + (3.74 ± 0.03)σf + (−1.27 ± 0.03)σf² with an adjusted R² = 0.999. For a reasonable standard deviation of the frequency noise, σf < 0.5, a linear approximation also holds with an adjusted R² = 0.999. As the output function given by equation (3.9) is no longer symmetric with respect to the peak located at t0f, the width of the output function is given by the interval between x1 and x2, where x1 and x2 are the solutions of the half-width equation. We found (figure 5d) that the width of the output function σout increases faster than linearly with σf, with an adjusted R² = 0.9999 over the entire range σf ∈ [0, 1]. A reasonable approximation for standard deviation of the frequency noise σf < 0.5 is the linear fit σout = (0.019 ± 0.003) + (2.20 ± 0.01)σf with an adjusted R² = 0.999. As a result, in the presence of frequency variability during probe trials, we predict theoretically that the SBF model (i) produces a Gaussian-like output function with a long tail, (ii) produces accurate interval timing (the output function is centred on t0f) and (iii) obeys the scalar property. We also noted that the peak time predicted for an arbitrary pdff, i.e. t0f = T/(1 + γf), is identical with the peak time in the particular case of Gaussian noise.
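The origin of the long right tail can be seen in a toy calculation: if an oscillator's probe-trial frequency is f(1 + ε) with symmetric noise ε, its contribution to the output peaks near t = T/(1 + ε), and the map ε → T/(1 + ε) turns a symmetric distribution into a right-skewed one. A short numpy sketch (illustrative only; the truncation of ε to keep frequencies positive and the sample sizes are our choices, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(3)
T = 10.0  # criterion time in seconds (illustrative)

def peak_time_skewness(sigma_f, n=200_000):
    eps = sigma_f * rng.standard_normal(n)
    eps = eps[eps > -0.5]          # truncate so probe frequencies stay positive
    t_peak = T / (1 + eps)         # per-oscillator peak times
    d = t_peak - t_peak.mean()
    # sample skewness: third central moment over variance^(3/2)
    return (d ** 3).mean() / (d ** 2).mean() ** 1.5

# Symmetric frequency noise maps onto an asymmetric, long-right-tailed
# distribution of peak times, and the skew grows with sigma_f.
s_small = peak_time_skewness(0.1)
s_large = peak_time_skewness(0.3)
```

This reproduces qualitatively the behaviour in figure 5b,c: both half-widths grow with σf, but the t > t0f side grows much faster.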

Figure 5. Frequency noise produces a skewed Gaussian-like output with a long tail—theoretical predictions. (a) Theoretically predicted output function for Gaussian noise affecting the oscillators' frequencies is a skewed Gaussian-like output. The normalized output function is plotted against the normalized time t/t0f, where t0f is the time marker for the peak of the output function. The skewness is measured by the two corresponding half-widths τ1 and τ2. (b) Although both half-widths increase with the standard deviation of the frequency noise, the long tail of the output is determined by the very fast increase of τ2. (c) A quantitative measure of the skewness is the ratio τ2/τ1, which increases faster than linearly with σf. (d) The width of the output function σout also increases faster than linearly with σf. (Online version in colour.)

(g) Particular case: finite frequency range and time-scale invariance in the presence of Gaussian noise affecting oscillators’ frequencies during probe trials

For a finite range of frequencies (fmin < f < fmax) with a very large number of FC oscillators Nin, a more realistic estimation of the output function from equation (3.7) is (see electronic supplementary material, section H for detailed calculations):

A significant difference between equation (3.9), which is valid in the limit of an infinite frequency range of the FC oscillators, and equation (3.10), which takes into consideration that there is always only a finite frequency range of the FC oscillators, is the frequency-dependent factor in the output function represented by the difference of the two Erf() functions. The output function in equation (3.10) resembles a Gaussian with a long tail and obeys the time-scale invariance property.

We used both the SBF-sin and SBF-ML implementations to numerically verify that (i) the output function resembles a Gaussian with a long tail (figure 6a), and (ii) the width of the output function increases linearly with the criterion time (figure 6b). The output functions of the SBF-ML implementation (see thin continuous lines in figure 6a) for T = 10 s and T = 30 s are reasonably fitted by Gaussian curves (see thick continuous line for T = 10 s and dashed line for T = 30 s in figure 6a). However, as predicted theoretically, the output has a long tail. The scalar property is indeed valid, because the width of the output function increases linearly with the criterion time (figure 6b).

Figure 6. Frequency noise produces a skewed Gaussian-like output with a long tail in numerical results. (a) The output function (thin continuous lines) of the SBF-ML model for T = 10 and T = 30 s has a Gaussian shape and its peak can be reasonably localized by a Gaussian fit (thick continuous line for T = 10 and dashed line for T = 30 s). The effect of frequency noise is an asymmetric output function that has a long tail. (b) The width of the output function increases linearly with the criterion time and obeys the time-scale invariance property. (Online version in colour.)

Furthermore, we checked that the scalar property holds not only for Gaussian noise, which allowed us to determine an analytic expression for the long-tailed output function in §3f, but also for uniform and Poisson noise.

4. Discussion

Computational models of interval timing vary widely with respect to the hypothesized mechanisms and assumptions by which temporal processing, time-scale invariance, and drug effects are explained. The putative mechanisms of timing rely on pacemaker/accumulator processes [5,6,89,90], sequences of behaviours [20], pure sine oscillators [8,34,91,92], memory traces [21,93–97] or cell and network-level models [27,98]. For example, neurometric functions from both single neurons and ensembles of neurons successfully paralleled the psychometric functions for to-be-timed intervals shorter than 1 s [27]. Reutimann et al. [99] also considered interacting populations that are subject to neuronal adaptation and synaptic plasticity based on the general principle of firing rate modulation in a single cell. Balancing long-term potentiation (LTP) and long-term depression (LTD) mechanisms are thought to modulate the firing rate of neural populations with the net effect that the adaptation leads to a linear decay of the firing rate over time. Therefore, the linear relationship between time and the number of clock ticks of the pacemaker–accumulator model in SET [5] was translated into a linearly decaying firing rate model that maps time onto a variable firing rate.

By and large, to address time-scale invariance, current behavioural theories assume convenient computations, rules or coding schemes. Scalar timing is explained as either deriving from computation of ratios of durations [5,6,100], adaptation of the speed at which perceived time flows [20] or from processes and distributions that conveniently scale up in time [21,91,93,95,96]. Some neurobiological models share computational assumptions with behavioural models and continue to address time-scale invariance by specific computations or embedded linear relationships [101]. Some assume that timing involves neural integrators capable of linearly ramping up their firing rate in time [98], whereas others assume LTP/LTD processes whose balance leads to a linear decay of the firing rate in time [99]. It is unclear whether such models can account for time-scale invariance in a large range of behavioural or neurophysiological manipulations.

Neurons are often viewed as communications channels that respond even to precisely delivered stimulus sequences in a random manner consistent with Gaussian noise [102]. Biological noise was shown to play important functional roles, e.g. enhancing signal detection through stochastic resonance [103,104] and stabilizing synchrony [105,106]. Firing rate variability in neural oscillators also results from ongoing cortical activity (see [106,107] and references therein), which may appear noisy simply because it is not synchronized with obvious stimuli.

A possible common ground for all interval timing models could be the threshold accommodation phenomenon that allows stimulus selectivity [108,109] and promotes coincidence detection [11]. Farries [110] showed that dynamic threshold change in the subthalamic nucleus (STn), which projects to the output nuclei of the BG, allows the STn to act either as an integrator for rate-coded inputs or as a coincidence detector [110] (figure 2). Interestingly, under both conditions, faulty (noisy) processing explains time-scale invariance. For example, Killeen & Taylor [28] explained scale invariance of counting in terms of noisy information transfer between counters. Similarly, here, we explained time-scale invariance of timing in terms of noisy coincidence detection during timing. Therefore, it seems that whether the BG acts as a counter or as a coincidence detector, neural noise alone can explain time-scale invariance.

Our theoretical predictions based on an SBF model show that time-scale invariance emerges as the property of a (very) large and noisy network. Furthermore, we showed that the output function of an SBF model always resembles the Gaussian shape found in behavioural experiments, regardless of the type of noise affecting the timing network. We showed analytically that in the presence of arbitrary criterion variability alone the SBF model produces an output that (i) has a symmetric and Gaussian shape, (ii) is accurate, i.e. the peak of the output is located at t0T = T(1 + γT), where γT is a constant that depends on the type of memory noise, and (iii) has a width that increases linearly with the criterion time, i.e. obeys the time-scale invariance property. The memory variability is ascribed to storing or retrieving the representation of criterion time to and from the long-term memory (figure 2b). We also showed analytically and verified numerically that for a Gaussian noise affecting the memory of the criterion time the output function of the SBF-sin model is analytic and its peak is at t0T = T, which means that for Gaussian noise γT = 0 (figure 4a). All of the above properties were also verified by replacing phase oscillators with biophysically realistic ML model neurons (figure 4b,c).

We also showed analytically that, in the presence of arbitrary frequency variability alone, the SBF model produces an output that (i) has a Gaussian-like shape (based on the central limit theorem), (ii) is accurate, i.e. the peak of the output is located at t0f = T/(1 + γf), where γf is a constant that depends on the type of frequency noise, and (iii) has a width σout = T(1 + γf)σf that increases linearly with the criterion time, i.e. obeys the time-scale invariance property. In the presence of Gaussian noise, the output function is analytic, asymmetric and Gaussian-like (figure 5a) with a skewness that increases quadratically with the standard deviation of the frequency noise (figure 5b). In addition to the fact that the standard deviation of the output function is proportional to the criterion time and, therefore, obeys the time-scale invariance property, it also increases quadratically with the standard deviation of the frequency noise (figure 5d). For Gaussian noise, the peak of the asymmetric, long-tailed Gaussian-like output (figure 5a) resembles experimental data that show a strong long tail in subjects' responses (figure 1c).

Our results regarding the effect of noise on interval timing support and extend the speculation [34] by which an SBF model requires at least one source of variance (noise) to address time-scale invariance. Rather than being a signature of higher-order cognitive processes or specific neural computations related to timing, time-scale invariance naturally emerges in a massively connected brain from the intrinsic noise of neurons and circuits [4,27]. This provides the simplest explanation for the ubiquity of scale invariance of interval timing in a large range of behavioural, lesion and pharmacological manipulations.


Neurometric amplitude-modulation detection threshold in the guinea-pig ventral cochlear nucleus

Amplitude modulation (AM) is a key information-carrying feature of natural sounds. The majority of physiological data on AM representation are in response to 100%-modulated signals, whereas psychoacoustic studies usually operate around detection threshold (∼5% AM). Natural sounds are characterised by low modulation depths (<<100% AM).
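For reference, a sinusoidally amplitude-modulated tone with modulation depth m is s(t) = [1 + m sin(2π fm t)] sin(2π fc t); at the psychoacoustic threshold of ∼5% AM the envelope varies by only ±5%. A minimal synthesis sketch (the carrier, modulator, and sampling-rate values below are illustrative, not the stimulus parameters used in the study):

```python
import numpy as np

def am_tone(fc, fm, m, dur, fs=44100):
    """Sinusoidal AM tone: carrier fc (Hz), modulator fm (Hz),
    modulation depth m (0 = unmodulated, 1 = 100% AM)."""
    t = np.arange(int(dur * fs)) / fs
    return (1 + m * np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)

# 1 kHz carrier, 10 Hz modulator: the peak amplitude grows from 1 toward 2
# as the modulation depth goes from 0 to 100%.
quiet = am_tone(1000, 10, 0.0, 1.0)
full = am_tone(1000, 10, 1.0, 1.0)
```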

Recording from ventral cochlear nucleus neurons, we examine the temporal representation of AM tones as a function of modulation depth. At this locus there are several physiologically distinct neuron types which either preserve or transform temporal information present in their auditory nerve fibre inputs.

Modulation transfer function bandwidth increases with increasing modulation depth.

Best modulation frequency is independent of modulation depth.

Neural AM detection threshold varies with unit type, modulation frequency, and sound level. Chopper units have better AM detection thresholds than primary-like units. The most sensitive chopper units have thresholds around 3% AM, similar to human psychophysical performance.

Abstract Amplitude modulation (AM) is a pervasive feature of natural sounds. Neural detection and processing of modulation cues is behaviourally important across species. Although most ecologically relevant sounds are not fully modulated, physiological studies have usually concentrated on fully modulated (100% modulation depth) signals. Psychoacoustic experiments mainly operate at low modulation depths, around detection threshold (∼5% AM). We presented sinusoidal amplitude-modulated tones, systematically varying modulation depth between zero and 100%, at a range of modulation frequencies, to anaesthetised guinea-pigs while recording spikes from neurons in the ventral cochlear nucleus (VCN). The cochlear nucleus is the site of the first synapse in the central auditory system. At this locus significant signal processing occurs with respect to representation of AM signals. Spike trains were analysed in terms of the vector strength of spike synchrony to the amplitude envelope. Neurons showed either low-pass or band-pass temporal modulation transfer functions, with the proportion of band-pass responses increasing with increasing sound level. The proportion of units showing a band-pass response varies with unit type: sustained chopper (CS) > transient chopper (CT) > primary-like (PL). Spike synchrony increased with increasing modulation depth. At the lowest modulation depth (6%), significant spike synchrony was only observed near to the unit's best modulation frequency for all unit types tested. Modulation tuning therefore became sharper with decreasing modulation depth. AM detection threshold was calculated for each individual unit as a function of modulation frequency. Chopper units have significantly better AM detection thresholds than do primary-like units. AM detection threshold is significantly worse at 40 dB vs. 10 dB above pure-tone spike rate threshold. 
Mean modulation detection thresholds for sounds 10 dB above pure-tone spike rate threshold at best modulation frequency are (95% CI) 11.6% (10.0–13.1) for PL units, 9.8% (8.2–11.5) for CT units, and 10.8% (8.4–13.2) for CS units. The most sensitive guinea-pig VCN single unit AM detection thresholds are similar to human psychophysical performance (∼3% AM), while the mean neurometric thresholds approach whole animal behavioural performance (∼10% AM).
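The synchrony measure referred to in the abstract is the standard vector strength: each spike is assigned a phase within the modulation cycle, and the mean resultant length of the unit phase vectors is computed (1 = perfect phase locking, ∼0 = none). A minimal sketch (the spike trains below are made up for illustration):

```python
import numpy as np

def vector_strength(spike_times, fm):
    """Vector strength of spikes relative to a modulation frequency fm (Hz)."""
    phases = 2 * np.pi * fm * np.asarray(spike_times)  # phase of each spike
    # Length of the mean unit vector over all spike phases.
    return np.hypot(np.cos(phases).sum(), np.sin(phases).sum()) / len(phases)

# Spikes locked to one phase of a 10 Hz envelope give VS near 1;
# spikes at random times give VS near 0.
locked = vector_strength(np.arange(100) * 0.1, fm=10.0)
rand_vs = vector_strength(np.random.default_rng(0).uniform(0, 10, 1000), fm=10.0)
```

A neurometric AM detection threshold can then be defined per unit as the smallest modulation depth at which the vector strength is statistically significant (e.g. by a Rayleigh test), which is the kind of analysis the abstract describes.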


Relation of frontal eye field activity to saccade initiation during a countermanding task

The countermanding (or stop signal) task probes the control of the initiation of a movement by measuring subjects' ability to withhold a movement in various degrees of preparation in response to an infrequent stop signal. Previous research found that saccades are initiated when the activity of movement-related neurons reaches a threshold, and saccades are withheld if the growth of activity is interrupted. To extend and evaluate this relationship of frontal eye field (FEF) activity to saccade initiation, two new analyses were performed. First, we fit a neurometric function that describes the proportion of trials with a stop signal in which neural activity exceeded a criterion discharge rate as a function of stop signal delay, to the inhibition function that describes the probability of producing a saccade as a function of stop signal delay. The activity of movement-related but not visual neurons provided the best correspondence between neurometric and inhibition functions. Second, we determined the criterion discharge rate that optimally discriminated between the distributions of discharge rates measured on trials when saccades were produced or withheld. Differential activity of movement-related but not visual neurons could distinguish whether a saccade occurred. The threshold discharge rates determined for individual neurons through these two methods agreed. To investigate how reliably movement-related activity predicted movement initiation the analyses were carried out with samples of activity from increasing numbers of trials from the same or from different neurons. The reliability of both measures of initiation threshold improved with number of trials and neurons to an asymptote of between 10 and 20 movement-related neurons. Combining the activity of visual neurons did not improve the reliability of predicting saccade initiation. 
These results demonstrate how the activity of a population of movement-related but not visual neurons in the FEF contributes to the control of saccade initiation. The results also validate these analytical procedures for identifying signals that control saccade initiation in other brain structures.
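The second analysis described above amounts to a one-dimensional discriminant: sweep candidate criterion discharge rates and keep the one that best separates the distribution of rates on trials when a saccade was produced from trials when it was withheld. A minimal sketch (the rate distributions here are synthetic placeholders, not FEF data):

```python
import numpy as np

def optimal_discriminant_threshold(rates_move, rates_withheld):
    """Return the criterion rate maximising the mean classification accuracy
    between 'saccade produced' and 'saccade withheld' trials."""
    candidates = np.unique(np.concatenate([rates_move, rates_withheld]))
    accs = [0.5 * ((rates_move >= c).mean() + (rates_withheld < c).mean())
            for c in candidates]
    best = int(np.argmax(accs))
    return candidates[best], accs[best]

rng = np.random.default_rng(1)
move = rng.normal(100, 10, 500)      # discharge rates on trials with a saccade
withheld = rng.normal(60, 10, 500)   # discharge rates on canceled trials
thr, acc = optimal_discriminant_threshold(move, withheld)
```

With well-separated distributions the optimal criterion falls between the two means and accuracy approaches 1; for overlapping distributions (as for visual neurons in the study) accuracy stays near chance, which is exactly the contrast the abstract reports.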

Figures

Investigating the neural basis of saccade preparation with the countermanding task. A.…

Countermanding task. At the beginning of each trial, monkeys fixate a central spot…

Neurometric threshold. A. Saccades are more likely to be canceled if the…

Optimal discriminant threshold. Theoretical distributions of maximum activity for canceled (thick solid) and…

Activity of a typical movement-related neuron. A. Activity in all trials with…

Fitting neurometric and psychometric functions for neuron shown in Figure 5. A.…

Computing optimal discriminant threshold for neuron shown in Figure 5. A. Probability…

Comparison of neurometric and discriminant thresholds for the sample of FEF neurons with…

Time of threshold crossing in single trials. A. Distribution of the time…

Effect of pooling trials on accuracy of accounting for saccade initiation. A.…

Effect of pooling trials across neurons on accuracy of accounting for saccade initiation.…

Time of threshold crossing of pooled activity. A. Time that pooled spike…

Lack of relation of activity of visual neurons in FEF to movement initiation.…



Frequently Asked Questions

Insurance Plans Accepted:

Currently, LINS neuropsychologists participate with Medicare, Workers' Compensation, and No-Fault insurance. In addition, if a plan allows for out-of-network services, LINS may bill your insurance company for the cost of the evaluation. Prior to the onset of services, our office manager will determine any patient financial obligation. Many patients choose to pay for the evaluation and to submit the claim on their own to the insurance company for reimbursement. Our billing staff have over ten years of experience providing patients with the necessary paperwork so they will be reimbursed.

Will my insurance company pay for the evaluation?

Neuropsychological evaluations are often covered when there is a history of a medical or neurological condition (e.g. brain injury, seizure, loss of consciousness). Further, many insurance companies will cover some or all of an evaluation that is performed to help physicians better understand cognitive change (e.g. the reason for decline in memory, attention, or problem-solving skills). Prior to the onset of services, our office manager will contact your insurance company and provide you with information regarding your plan's coverage.

What is a neuropsychological examination?

The neuropsychological examination is one of the methods of diagnosing neurodevelopmental, neurodegenerative, and acquired disorders of brain function. It is frequently a part of the overall neurodiagnostic assessment, which includes other neurometric techniques such as CT, MRI, EEG, and SPECT. The purpose of the neuropsychological examination is to assess the clinical relationship between the brain/central nervous system and behavioral dysfunction. It is a neurodiagnostic, consultative service and NOT a mental health/psychological evaluation or psychiatric treatment service.

The Social Security Administration defines neuropsychological testing as the "administration of standardized tests that are reliable and valid with respect to assessing impairment in brain functioning." The examination is performed by a qualified neuropsychologist who has undergone specialized education and intensive training in clinical neuroanatomy, neurology, and neurophysiology. The neuropsychologist works closely with the primary or consultant physician in assessing patient cerebral status. Neuropsychological services are designated as "medicine, diagnostic" by the federal Health Care Financing Administration (HCFA), are subsumed under "Central Nervous System Assessments" in the 1996 CPT Code Book, and have corresponding ICD diagnoses.

Neuropsychological examinations are clinically indicated and medically necessary when patients display signs and symptoms of intellectual compromise or cognitive and/or neurobehavioral dysfunction that involve, but are not restricted to, memory deficits, language disorders, impairment of organization and planning, difficulty with cognition, and perceptual abnormalities. Frequent etiologies include head trauma, stroke, tumor, infectious disease, toxic exposure, metabolic abnormalities, autoimmune disease, genetic defects, learning disabilities, and neurodegenerative disease. The examination entails the taking of an extensive history (including review of medical records) and the administration of a comprehensive battery of tests that can take many hours and requires intensive data analysis. Consultation with other medical professionals, such as neurologists, neurosurgeons, and radiologists, is common. The sensitivity of neuropsychological tests is such that they often reveal abnormality in the absence of positive findings on CT and MRI scans.

What is a neuropsychologist?

A clinical neuropsychologist is a professional within the field of psychology with special expertise in the applied science of brain-behavior relationships. Clinical neuropsychologists use this knowledge in the assessment, diagnosis, treatment, and/or rehabilitation of patients across the lifespan with neurological, medical, neurodevelopmental and psychiatric conditions, as well as other cognitive and learning disorders. The clinical neuropsychologist uses psychological, neurological, cognitive, behavioral, and physiological principles, techniques and tests to evaluate patients' neurocognitive, behavioral, and emotional strengths and weaknesses and their relationship to normal and abnormal central nervous system functioning. The clinical neuropsychologist uses this information and information provided by other medical/healthcare providers to identify and diagnose neurobehavioral disorders, and plan and implement intervention strategies. The specialty of clinical neuropsychology is recognized by the American Psychological Association and the Canadian Psychological Association. Clinical neuropsychologists are independent practitioners (healthcare providers) of clinical neuropsychology and psychology. The clinical neuropsychologist (minimal criteria) has: 1. A doctoral degree in psychology from an accredited university training program. 2. An internship, or its equivalent, in a clinically relevant area of professional psychology. 3. The equivalent of two (full-time) years of experience and specialized training, at least one of which is at the post-doctoral level, in the study and practice of clinical neuropsychology and related neurosciences. These two years include supervision by a clinical neuropsychologist. 4. A license in his or her state or province to practice psychology and/or clinical neuropsychology independently, or employment as a neuropsychologist by an exempt agency.

*The above definition is provided by the National Academy of Neuropsychology. Additional information can be found on the NAN website.

Why have I been referred?

Neuropsychological evaluations are requested specifically to help your doctors, teachers, school psychologist, or other professionals understand how the different areas and systems of the brain are working. Testing is usually recommended when there are symptoms or complaints involving memory or thinking. This can be signaled by a change in concentration, organization, reasoning, memory, language, perception, coordination, or personality. The changes may be due to any of a number of medical, neurological, psychological, or genetic causes.


Contents

All neuroimaging is considered part of brain mapping. Brain mapping can be conceived as a higher form of neuroimaging, producing brain images supplemented by the result of additional (imaging or non-imaging) data processing or analysis, such as maps projecting (measures of) behavior onto brain regions (see fMRI). One such map, called a connectogram, depicts cortical regions around a circle, organized by lobes. Concentric circles within the ring represent various common neurological measurements, such as cortical thickness or curvature. In the center of the circles, lines representing white matter fibers illustrate the connections between cortical regions, weighted by fractional anisotropy and strength of connection. [1] At higher resolutions brain maps are called connectomes. These maps incorporate individual neural connections in the brain and are often presented as wiring diagrams. [2]

Brain mapping techniques are constantly evolving, and rely on the development and refinement of image acquisition, representation, analysis, visualization and interpretation techniques. [3] Functional and structural neuroimaging are at the core of the mapping aspect of brain mapping.

Some scientists have criticized the brain-image-based claims made in scientific journals and the popular press, such as the discovery of "the part of the brain responsible for" things like love, musical ability, or a specific memory. Many mapping techniques have a relatively low resolution, with a single voxel encompassing hundreds of thousands of neurons. Many functions also involve multiple parts of the brain, meaning that this type of claim is probably both unverifiable with the equipment used and generally based on an incorrect assumption about how brain functions are divided. It may be that most brain functions will only be described correctly after being measured with much more fine-grained measurements that look not at large regions but instead at a very large number of tiny individual brain circuits. Many of these studies also have technical problems such as small sample size or poor equipment calibration, which means they cannot be reproduced, considerations which are sometimes ignored to produce a sensational journal article or news headline. In some cases brain mapping techniques are used for commercial purposes, lie detection, or medical diagnosis in ways that have not been scientifically validated. [4] [ page needed ]

In the late 1980s in the United States, the Institute of Medicine of the National Academy of Science was commissioned to establish a panel to investigate the value of integrating neuroscientific information across a variety of techniques. [5] [ page needed ]

Of specific interest is using structural and functional magnetic resonance imaging (fMRI), diffusion MRI (dMRI), magnetoencephalography (MEG), electroencephalography (EEG), positron emission tomography (PET), near-infrared spectroscopy (NIRS) and other non-invasive scanning techniques to map anatomy, physiology, perfusion, function and phenotypes of the human brain. Both healthy and diseased brains may be mapped to study memory, learning, aging, and drug effects in various populations such as people with schizophrenia, autism, and clinical depression. This led to the establishment of the Human Brain Project. [6] [ page needed ] It may also be crucial to understanding traumatic brain injuries (as in the case of Phineas Gage) [7] and improving brain injury treatment. [8] [9]

Following a series of meetings, the International Consortium for Brain Mapping (ICBM) evolved. [10] [ page needed ] The ultimate goal is to develop flexible computational brain atlases.

The interactive citizen-science website Eyewire, launched in 2012, maps the retinal cells of mice. In 2021, the most comprehensive 3D map of the human brain to date was published by a U.S. IT company. It shows neurons and their connections, along with blood vessels and other components, of about one millionth of a human brain. For the map, the 1 mm³ fragment was sliced into more than 5,000 nanometer-thin sections, which were scanned with an electron microscope. The interactive map required 1.4 petabytes of storage space. [12] [13]

Brain mapping is the study of the anatomy and function of the brain and spinal cord through the use of imaging (including intra-operative, microscopic, endoscopic and multi-modality imaging), immunohistochemistry, molecular & optogenetics, stem cell and cellular biology, engineering (material, electrical and biomedical), neurophysiology and nanotechnology.


Discussion

In the present study, we revisited optimal population coding using Bayesian ideal observer analysis in both the reconstruction and the discrimination paradigm. Both lead to very similar conclusions with regard to the optimal tuning width (Fig. 2 B and C) and the optimal noise correlation structure (Fig. 4 C and D). Importantly, the signal-to-noise ratio—which is critically limited by the available decoding time—plays a crucial role for the relative performance of different coding schemes: Population codes well suited for long intervals may be severely suboptimal for short ones. In contrast, Fisher information is largely ignorant of the limitations imposed by the available decoding time—codes that are favorable for long integration intervals seem favorable for short ones as well.

Whereas Fisher information yields an accurate approximation of the ideal observer performance in the limit of long decoding time windows, this is not necessarily true in the limit of large populations. We showed analytically that the ideal observer error for a population with Fisher-optimal tuning functions does not decay to zero in the limit of a large number of neurons but saturates at a value determined solely by the available decoding time (Fig. 3C). In contrast, Fisher information predicts that the error scales like the inverse of the population size, independent of time (Fig. 3B). Thus, the “folk theorem” that Fisher information provides an accurate assessment of coding quality in the limit of large population size is correct only if the width of the tuning functions is not optimized as the population grows.
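The 1/N scaling that Fisher information predicts at fixed tuning width can be illustrated with a toy population of independent Poisson neurons. The von Mises tuning curves, peak rate, and decoding time below are illustrative assumptions, not the model actually used in the study:

```python
import numpy as np

def fisher_information(theta, n_neurons, kappa, rate_max=20.0, T=1.0):
    """Fisher information about a circular stimulus theta carried by
    n_neurons independent Poisson neurons with von Mises tuning curves:
    I(theta) = T * sum_n f_n'(theta)^2 / f_n(theta)."""
    centers = np.linspace(0.0, 2.0 * np.pi, n_neurons, endpoint=False)
    f = rate_max * np.exp(kappa * (np.cos(theta - centers) - 1.0))  # mean rates
    df = -kappa * np.sin(theta - centers) * f                       # df/dtheta
    return T * np.sum(df ** 2 / f)

# With the tuning width (kappa) held fixed, Fisher information grows
# linearly with population size, so the predicted squared error ~ 1/N.
I_100 = fisher_information(0.3, 100, kappa=2.0)
I_200 = fisher_information(0.3, 200, kappa=2.0)
```

Doubling the population roughly doubles the Fisher information and so halves the predicted squared error; the saturation described above appears only when the tuning width is re-optimized as N grows, which this fixed-width calculation deliberately does not do.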

In the discrimination task, we explained this behavior by showing that the coarse discrimination error is independent of the population size for ensembles with Fisher-optimal tuning curves. In the reconstruction task, large estimation errors play a similar role to the coarse discrimination error. The convergence of the reconstruction error to a normal distribution with variance equal to the inverse Fisher information relies on a linear approximation of the derivative of the log-likelihood (19). If the tuning function width scales with population size—as it does if the tuning functions are optimized for Fisher information—the quality of this linear approximation does not improve with increasing population size because the curvature of the tuning functions is directly coupled to the tuning width. As a consequence, the Cramér–Rao bound from Eq. 1 is not tight even asymptotically, leading to the observed discrepancies between Fisher information and the MMSE.
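For reference, the bound invoked here (Eq. 1 of the original article, presumably in this standard form) is the Cramér–Rao inequality, relating the mean squared error of an unbiased estimator built from the population response r to the Fisher information:

```latex
\mathbb{E}\left[(\hat{\theta}-\theta)^2\right] \;\geq\; \frac{1}{J(\theta)},
\qquad
J(\theta) \;=\; \mathbb{E}\left[\left(\frac{\partial}{\partial \theta}\,
\log p(\mathbf{r}\mid\theta)\right)^{2}\right].
```

The discussion above turns on when this lower bound is tight: the linear approximation of the log-likelihood derivative must improve with population size, which fails for Fisher-optimal tuning widths.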

Similarly, Fisher information also fails to evaluate the ideal observer performance for different noise correlation structures correctly when the time available for decoding is short. The reason is that the link between Fisher information and the optimal reconstruction or discrimination error also relies on the central limit theorem (4, 18, 19). Therefore, in the presence of noise correlations, the approximation of the ideal observer error obtained from Fisher information can converge very slowly or not at all to the true error for increasing population size, because the observations gathered from different neurons are no longer independent. In fact, our results show that it is crucial not to rely on the asymptotic approach of Fisher information alone to determine the relative quality of different correlation structures.

In contrast to our study, earlier studies using the discrimination framework mostly measured the minimal linear discrimination error or computed the fine discrimination error only (4, 13, 27–29). Two other studies used upper bounds on the MDE, which are tighter than the minimal linear discrimination error (13, 30), but no study has so far computed the exact MDE for the full range of the neurometric function. For a detailed discussion of these studies see SI Discussion. Information theoretic approaches provide a third framework for evaluating neural population codes in addition to the reconstruction and discrimination framework studied here. For example, stimulus-specific information (SSI) has been used to assess the role of the noise level for population coding in small populations (31) and in the asymptotic regime, SSI and Fisher information seem to yield qualitatively similar results (32). In contrast to neurometric function analysis, information theoretic approaches are not directly linked to a behavioral task.

In conclusion, neurometric function analysis offers a tractable and intuitive framework for the analysis of neural population coding with an exact ideal observer model. It is particularly well suited for comparing theoretical assessments of different population codes with results from psychophysical or neurophysiological measurements, as the two-alternative forced-choice orientation discrimination task is widely used in neurophysiological and psychophysical investigations in humans and monkeys (33, 34). In contrast to Fisher information, neurometric functions are informative not only about fine, but also about coarse discrimination performance. For example, two codes with the same Fisher information may even yield different neurometric functions (Fig. S6). Our results suggest that the validity of conclusions based on Fisher information depends on the coding scenario being investigated: If the parameter of interest induces changes that either impair or improve both fine and coarse discrimination performance (e.g., when studying the effect of population size for fixed, wide tuning functions), Fisher information is a valuable tool for assessing different coding schemes. If, however, fine discrimination performance can be improved at the cost of coarse discrimination performance (as is the case with tuning width), optimizing Fisher information will impair the average performance of the population code. In this case, the optimal population codes are quite different from those inferred from Fisher information.


Associate Professor
Department of Psychology

Faculty Affiliate
Neuroscience
Applied Science

The psychophysiology of attention & cognitive control and assessment of cognitive deficits in clinical populations.

Dickter, C. & Kieffaber, P. D., (2013). EEG Methods in Psychological Science. Sage Publications, London.

Kieffaber, P.D., Hershaw, J., Sredl, J., West, R. (In Press) Electrophysiological Correlates of Error Initiation and Response Correction. NeuroImage.

Kieffaber, P.D., Cunningham, E., & Hershaw, J., Okhravi, H. R. (In Press) A brief neurometric battery for the assessment of age-related changes in cognitive function. Clinical Neurophysiology.

Brenner, C., Rumak, S. P., Burns, A., & Kieffaber, P. D. (2014) The Role of Encoding and Attention in Facial Emotion Memory: An EEG Investigation, International Journal of Psychophysiology, 93, 398-410.

* Oleynick, V. C., Thrash, T. M., LeFew, M. C., Moldovan, E. G., Kieffaber, P. D. (2014) The scientific study of inspiration in the creative process: Challenges and opportunities, Frontiers in Human Neuroscience, 8(436), 1-8.

West, R., Tiernan, B. N., Kieffaber, P. D., Bailey, K., & Anderson, S. (2014) The effects of age on the neural correlates of feedback processing in a naturalistic gambling game. Psychophysiology, 51(8), 734-745.

West, R., Bailey, K., Anderson, S. Kieffaber, P. (2014) Beyond the FRN: A Spatio-temporal Analysis of the Neural Correlates of Feedback Processing in a Virtual Blackjack Game. Brain and Cognition, 86, 104-115.

Dickter, C. L., Kieffaber, P. D., Kittel, J. A. & Forestell, C. A. (2013) Mu Suppression as an Indicator of Activation of the Perceptual-Motor System by Smoking-related Cues in Smokers, Psychophysiology, 50(7), 664-70

* Lindbergh, C. A., Kieffaber, P. D. (2013) The Neural Correlates of Temporal Judgments in the Duration Bisection Task. Neuropsychologia, 51(2), 191-196.

* Gayle, L. C., Gal, D., & Kieffaber, P. D. (2012) Measuring affective reactivity in individuals with autism spectrum personality traits using the visual mismatch negativity event-related brain potential. Frontiers in Human Neuroscience, 6(334), 1-7.

Kieffaber, P. D., Kruschke, J. K., Walker, P. M., Cho, R. Y., & Hetrick, W. P. (2012) The contributions of stimulus- and response-set to control and conflict in task-set switching. Journal of Experimental Psychology: Human Perception and Performance. Advance online publication. doi: 10.1037/a0029545

Kieffaber, P. D., Cho, R. Y. (2010) Induced cortical gamma-band oscillations reflect cognitive control elicited by implicit probability cues in the preparing to overcome prepotency (POP) task. Cognitive Affective & Behavioral Neuroscience, 10, 431-440.

Brenner, C.A., Kieffaber, P. D., Johannesen, J.K., O'Donnell, B. F., Hetrick, W. P. (2009) Event-related potential abnormalities in Schizophrenia: A failure to 'gate-in' salient information?, Schizophrenia Research, 113(2-3), 332-338.

* Paynter, C., Kieffaber, P. D., Reder, L.M. (2009) Knowing we know before we know: ERP correlates of initial feeling-of-knowing. Neuropsychologia, 47(3), 796-803.

Reder, L. M., Park, H., & Kieffaber, P. D. (2009) Memory systems do not divide on consciousness: Reinterpreting memory in terms of activation and binding. Psychological Bulletin, 135 (1), 23-49.

Carroll, C., Kieffaber, P. D., Vohs, J.L., O'Donnell, B. F., Shekhar, A., Hetrick, W. P. (2008) Contributions of Spectral Frequency Analyses to the Study of P50 ERP Amplitude and Suppression in Bipolar Disorder With or Without a History of Psychosis. Bipolar Disorders, 10(7), 776-787.

Vohs, J.L., Hetrick, W. P., Kieffaber, P. D., Bodkins, M., Bismark, A., Shekhar, A., & O'Donnell, B. F. (2008) Visual event-related potentials in schizotypal personality disorder and schizophrenia. Journal of Abnormal Psychology, 117(1), 119-131.

Kieffaber, P. D., Marcoulides, G. A., White, M., & Harrington, D. E., (2007) Modeling the ecological validity of neurocognitive assessment in adults with acquired brain injury. Journal of Clinical Psychology in Medical Settings, 14 (3), 206-218

Kieffaber, P. D., O'Donnell, B. F., Shekhar, A., & Hetrick, W. P. (2007) Event-related brain potential evidence for preserved attentional set switching in schizophrenia. Schizophrenia Research, 93, 355-365.

Kieffaber, P. D., Kappenman, E., O'Donnell, B. F., Shekhar, A., Bodkins, M., & Hetrick, W. P. (2006) Shifting and maintenance of task set in schizophrenia. Schizophrenia Research, 84(2-3), 345-358.

Kieffaber, P. D. & Hetrick, W. P. (2005). Event-related Potential Correlates of Task-switching and Switch Costs. Psychophysiology, 42, 56-71.

Johannesen, J. K., Kieffaber, P. D., O'Donnell, B. F., Shekhar, A., Evans, J. D., & Hetrick, W. P. (2005). Contributions of subtype and spectral frequency analysis to the study of P50 ERP amplitude and suppression in schizophrenia. Schizophrenia Research, 78, 269-284.

Brown, S. M., Kieffaber, P. D., Vohs, J. L., Carroll, C. A., Tracy, J. A., Shekhar, A., O'Donnell, B. F., Steinmetz, J. E., & Hetrick, W. P. (2005). Eye-blink conditioning deficits indicate timing and cerebellar abnormalities in schizophrenia. Brain and Cognition, 58, 94-108.

Zirnheld, P. J., Carroll, C. A., Kieffaber, P. D., O'Donnell, B. F., Shekhar, A., & Hetrick, W. P. (2004). Haloperidol Impairs Learning and Error-Related Negativity (ERN) in Humans. Journal of Cognitive Neuroscience, 16(6), 1098-1112.


Objective

Event-related potentials (ERPs) show promise as markers of neurocognitive dysfunction, but conventional recording procedures render measurement of many ERP-based neurometrics clinically impractical. The purpose of this work was (a) to develop a brief neurometric battery capable of eliciting a broad profile of ERPs in a single, clinically practical recording session, and (b) to evaluate the sensitivity of this neurometric profile to age-related changes in brain function.

Methods

Nested auditory stimuli were interleaved with visual stimuli to create a 20-min battery designed to elicit at least eight ERP components representing multiple sensory, perceptual, and cognitive processes (Frequency & Gap MMN, P50, P3, vMMN, C1, N2pc, and ERN). Data were recorded from 21 younger and 21 high-functioning older adults.

Results

Significant multivariate differences were observed between ERP profiles of younger and older adults. Metrics derived from ERP profiles could be used to classify individuals into age groups with a jackknifed classification accuracy of 78.6%.

Conclusions

Results support the utility of this design for neurometric profiling in clinical settings.

Significance

This study demonstrates a method for measuring a broad profile of ERP-based neurometrics in a single, brief recording session. These markers may be used individually or in combination to characterize/classify patterns of sensory and/or perceptual brain function in clinical populations.
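The jackknifed (leave-one-out) classification reported above can be sketched as follows. The feature matrix, group separation, and nearest-centroid classifier are illustrative stand-ins; the study's actual ERP features and classifier are not specified in this summary:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ERP-profile features: 21 younger and 21 older adults,
# 8 component amplitudes each (stand-ins for the real neurometric battery).
young = rng.normal(0.0, 1.0, size=(21, 8))
older = rng.normal(0.8, 1.0, size=(21, 8))
X = np.vstack([young, older])
y = np.array([0] * 21 + [1] * 21)

def jackknife_accuracy(X, y):
    """Leave-one-out accuracy of a nearest-centroid classifier:
    each subject is classified by centroids computed without them."""
    hits = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i                # hold subject i out
        c0 = X[mask & (y == 0)].mean(axis=0)
        c1 = X[mask & (y == 1)].mean(axis=0)
        pred = int(np.linalg.norm(X[i] - c1) < np.linalg.norm(X[i] - c0))
        hits += int(pred == y[i])
    return hits / len(y)

accuracy = jackknife_accuracy(X, y)
```

Holding each subject out before computing the group centroids is what makes the jackknifed accuracy an honest estimate; classifying subjects with centroids that include them would inflate the figure.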


Labs and Research Topics

The department's research, in which students can take part in their practical study components, spans cutting-edge topics such as multisensory integration, brain oscillations and behaviour, cortical plasticity, computational neuroscience, and pharmaco-neuroimaging, among others. A variety of modern neuroscience tools and psychology labs are available to gain 'hands-on' experience in magnetic resonance imaging, magnetoencephalography, high-density electroencephalography, eye-tracking, transcranial magnetic and electric stimulation, and psychophysics. The Department of Psychology in Oldenburg is among the best-equipped in the country and is characterised by its unique decision to dedicate its resources to this Master's course.

Applied neurocognitive psychology lab (Prof. Dr. Jochem Rieger)

We investigate neural processes in the sensation-perception-action cycle in an interdisciplinary team. Central to our research are cutting-edge brain decoding methods, which we apply to invasive and non-invasive neuroimaging data in humans to learn how the brain accomplishes everyday tasks. The aim of our research is twofold. On the one hand, we are interested in basic research questions on how the brain constructs percepts from environmental sensory data, represents percepts, makes decisions, and controls muscles to interact with the environment. On the other hand, we are interested in applying our research to construct brain-machine interfaces that supplement human cognition, communication, and motor function. Examples of our work on decoding of cognitive states and our brain-controlled grasping project can be found on the lab webpage.

Biological psychology lab (Prof. Dr. Christiane Thiel)

Our research focuses on visuospatial and auditory attention, learning and plasticity, as well as the pharmacological modulation of such processes. The combination of pharmacological challenges with cognitive tasks in the context of functional neuroimaging (fMRI) studies is a powerful approach to directly assess pharmacological modulation of human brain activity. For example, we have performed several pharmacological fMRI studies showing a nicotinic modulation of visuospatial attention and shown that nicotine increases brain network efficiency. A long-term goal of such studies is to provide an experimental approach that has relevance to studying mechanisms of recovery and treatment effects in different patient populations.

Experimental psychology lab (Prof. Dr. Christoph Herrmann)

The lab is headed by Christoph Herrmann and focuses on physiological correlates of cognitive functions such as attention, memory and perception. The methods that are used comprise electroencephalography (EEG), magnetoencephalography (MEG), functional magnetic resonance imaging (fMRI), transcranial electric stimulation (TES), transcranial magnetic stimulation (TMS), eye-tracking, neural network simulations, and psychophysics. The focus of the research lies in the analysis of oscillatory brain mechanisms. Oscillatory brain activity is considered to be one of the electrophysiological correlates of cognitive functions. We analyse these brain oscillations in healthy and pathological conditions, simulate them for a better understanding and try to modulate them.

Psychological methods and statistics lab (Prof. Dr. Andrea Hildebrandt)

By applying and advancing multivariate statistical and psychometric modeling techniques, our research aims at better understanding individual differences in general cognitive functioning and social cognition. We develop and evaluate computerized test batteries rooted in experimental psychology for measuring human abilities and combine psychometric, neurometric (EEG, (f)MRI), molecular-genetic and hormonal assessments to investigate within- and between-person variations in cognition, emotion and personality. A special focus of our research is the processing of invariant and variant facial information – a basic domain of social cognition. We ask how abilities in the social domain are special compared with cognitive processing in general. To this aim we investigate typically functioning individuals across the life span, including old age and pathological conditions. Beyond these goals, we enjoy contemplating conceptual issues in psychological measurement.

Neuropsychology lab (Prof. Dr. Stefan Debener)

We use methods from experimental psychology and psychophysiology to study the relationship between the human brain and cognitive functions. One focus of our research is related to sensory deprivation and compensatory mechanisms. We study how hearing loss and deafness change the functional organization of the brain and what the consequences of these changes are for auditory rehabilitation. Related to this topic are studies investigating how information from different sensory modalities is combined to create a coherent percept of an object. Our key tool is high-density EEG, but we also use MEG, fMRI, and concurrent EEG-fMRI recordings. Because these tools provide us with complex, mixed signals that reflect different features of human brain function, we spend some time on the application and evaluation of signal un-mixing and signal integration procedures as well.

Neurocognition and functional neurorehabilitation group (Dr. Cornelia Kranczioch)

The research of the group is situated at the intersection of neuropsychology and neurorehabilitation. In brief, we are interested in how the treatment of impairments resulting from central nervous disorders can benefit from neurocognitive approaches and theories. Our research currently focuses on using motor imagery, that is, the mental practice of movements, to support neurorehabilitation, for instance following stroke or in Parkinson’s disease. In close collaboration with the Neuropsychology lab we conduct studies in which we combine motor imagery training with lab-based or mobile neurofeedback setups. We run studies in healthy volunteers to learn more about the feasibility and the limitations of the neurofeedback applications. Just as important for the group is research aimed at learning more about motor imagery and motor cognition in the absence of neurofeedback. We strive to implement what we learn from these studies in our work with patients.
The second research focus of the group is the neurocognition of visual-temporal attention. Here we work mainly but not exclusively with RSVP paradigms such as the Attentional Blink. Among other things we compare brain activity (EEG, fMRI) in instances in which attention fails and in which it helps to successfully solve a task, or we study brain activity to better understand interindividual variations in task performance.

Neurophysiology of everyday life group (Dr. Martin Bleichner)

Unwanted sound, generally referred to as noise, is an environmental pollutant which may cause hearing loss. Additionally, noise also acts as an unspecific stressor with detrimental effects on biological and psychological processes: noise pollution has been associated with cardiovascular problems, sleep disturbance, and cognitive impairments. These harmful non-auditory effects of noise pollution typically only occur accumulated over time.

However, it is challenging to determine under which conditions environmental noise has adverse effects because whether a person perceives a sound as disturbing, annoying or stressful cannot be derived from the acoustic properties of the sound. Any particular sound, independent of its sound pressure level or other features, may be experienced as noise and, thus, can have negative consequences on well-being. Instead, how a sound is perceived depends on individual preferences, cognitive capacity, current occupation, and duration of exposure. Therefore, we need perception-based noise dosimetry that allows quantification of the perceived noise exposure over extended periods of time.

Recent developments in mobile electroencephalography (EEG) make it possible to study brain activity beyond the lab and thereby to investigate how individuals perceive noise in everyday situations. Rather than monitoring the presence of noise, we can monitor the perceived noise exposure in the brain. In this research project, we want to use a combination of wireless EEG, concealed ear-centered electrode placement, and smartphone-based signal acquisition to study sound and noise perception in daily-life situations on an individual basis.

We will approach this topic in two parallel research lines. In the first research line, we will establish a relationship between EEG acquisition in the lab and in everyday situations. In the second research line, we will address individual noise perception and noise annoyance. On the one hand, we will work on overcoming the challenges involved in the acquisition and interpretation of EEG-signals that were acquired outside of the lab – this concerns signal artifacts and comparability to lab-based recordings. On the other hand, we will objectify the subjective noise disturbance in the lab and at the workplace. This takes place on three levels: subjective assessment, noise dosimetry and the recording of brain activity. Data obtained in the lab will be related to data obtained at the workplace. Our work will advance the field of mobile ear-centered EEG and will provide new insights on dealing with individual noise exposure.


Analysis of neural activity

Spike density functions were obtained by convolving the spike train with a function resembling a postsynaptic potential, R(t) = [1 − exp(−t/τg)]·exp(−t/τd), where τg is the time constant for the growth phase and τd is the time constant for the decay phase. Physiological data from excitatory synapses indicate that 1 ms and 20 ms are optimal values for τg and τd, respectively (Sayer et al. 1990).
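A minimal sketch of this convolution, assuming a 1-ms time grid and the τg = 1 ms, τd = 20 ms values quoted above; the unit-area normalization is our own choice so that the output reads as a rate in spikes/s:

```python
import numpy as np

def psp_kernel(tau_g=1.0, tau_d=20.0, dt=1.0, length_ms=200):
    """Kernel R(t) = (1 - exp(-t/tau_g)) * exp(-t/tau_d), normalized to
    unit area (integrated in seconds) so convolution yields spikes/s."""
    t = np.arange(0.0, length_ms, dt)
    k = (1.0 - np.exp(-t / tau_g)) * np.exp(-t / tau_d)
    return k / (k.sum() * dt / 1000.0)

def spike_density(spike_times_ms, duration_ms, dt=1.0):
    """Convolve a binary spike train (spike times in ms) with the kernel."""
    train = np.zeros(int(duration_ms / dt))
    train[(np.asarray(spike_times_ms) / dt).astype(int)] = 1.0
    return np.convolve(train, psp_kernel(dt=dt))[: len(train)]

# Four spikes in a 40-ms burst produce a smooth firing-rate estimate.
sdf = spike_density([100, 110, 120, 130], duration_ms=500)
```

Because the kernel is causal (zero at t = 0 and for t < 0), the rate estimate rises only after each spike, unlike a Gaussian kernel, which would smear activity backwards in time.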

The average firing rate in cancelled stop-signal trials was compared with that in latency-matched no-stop-signal trials and noncancelled stop-signal trials as a function of time from the target presentation. To perform this time-course analysis, we subtracted the spike density function during cancelled stop-signal trials from the average spike density function during either latency-matched no-stop-signal trials or noncancelled stop-signal trials. The resulting spike density function is referred to as the differential spike density function. The time at which activity in the two conditions, when saccades were produced and when saccades were cancelled, began to diverge was defined as the instant when the differential spike density function exceeded 2 SDs of the difference in activity over the 200-ms interval before the target presentation, provided that this differential spike density function reached 6 SD and remained >2 SD for 50 ms.
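The divergence-time criterion can be sketched as follows; the synthetic spike density functions, noise level, and target time are hypothetical and serve only to exercise the 2 SD / 6 SD / 50 ms rule described above:

```python
import numpy as np

def divergence_time(sdf_a, sdf_b, target_idx, dt=1.0):
    """Time (ms after target) when the differential spike density first
    exceeds 2 SD of its 200-ms pre-target baseline, provided it reaches
    6 SD somewhere later and stays above 2 SD for at least 50 ms."""
    diff = np.abs(sdf_a - sdf_b)
    sd = diff[target_idx - 200:target_idx].std()     # baseline variability
    post = diff[target_idx:]
    run = int(50 / dt)
    for i in range(len(post) - run):
        if (post[i:i + run] > 2 * sd).all() and post[i:].max() > 6 * sd:
            return i * dt
    return None

# Synthetic example: the two conditions diverge 200 ms after the target.
rng = np.random.default_rng(1)
noise = lambda: rng.normal(0.0, 0.5, 700)
sdf_cancelled = 20.0 + noise()
sdf_no_stop = 20.0 + noise()
sdf_no_stop[400:] += 15.0        # target at index 200, divergence at 400
t_div = divergence_time(sdf_cancelled, sdf_no_stop, target_idx=200)
```

With this synthetic input the detected divergence comes out near 200 ms after the target; the 50-ms persistence requirement is what keeps isolated noise excursions above the 2 SD line from triggering a false divergence.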

The time of modulation of neurons was also determined using a receiver operating characteristic (ROC) analysis (Green and Swets 1966) as described elsewhere (Murthy et al. 2007, 2009). The spike density function from a set of at least five cancelled stop-signal trials was compared with the spike density function from a set of at least five either noncancelled stop-signal trials or latency-matched no-stop-signal trials. Spike trains from the original sets of trials were bootstrapped to construct 500 simulated spike trains in each set for reliable comparison. A simulated spike train was constructed by randomly selecting one trial from the set of original trials at every 1-ms time bin. If a spike occurred in that trial at that instant, the spike was added to the simulated spike train. Comparisons were conducted by calculating ROC curves for successive 1-ms bins starting at the time of target presentation and continuing until all saccades were initiated during one set of trials. The area under the ROC curve provides a quantitative measure of the separation between two distributions of activity. An area under the ROC curve of 0.5 signifies that the two distributions overlap completely, whereas an extreme value of 0.0 or 1.0 signifies that the two distributions do not overlap at all. To describe the growth in the area under the ROC curve over time, the data were fit with a cumulative Weibull distribution function of the form W(t) = γ − (γ − δ)·exp[−(t/α)^β], where t is the time, ranging from when the area under the ROC curve attains its minimum to when it reaches its maximum; α is the time at which the area under the ROC curve reaches the sum of 63.2% of its maximum value γ and 36.8% of its minimum value δ; and β is the slope of the function. The time of differential activity was determined from the growth of the ROC area over time and is defined as the time when the ROC area reached a value of 0.7.
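The two ingredients of this analysis can be sketched directly: the ROC area for a single time bin reduces to the normalized Mann-Whitney statistic over all trial pairs, and the quoted Weibull form reaches 63.2% of γ plus 36.8% of δ at t = α. Trial counts and values below are illustrative, not data from the study:

```python
import numpy as np

def roc_area(rates_a, rates_b):
    """Area under the ROC curve separating two sets of single-bin spike
    counts: P(a > b) + 0.5 * P(a == b) over all trial pairs."""
    a = np.asarray(rates_a, dtype=float)[:, None]
    b = np.asarray(rates_b, dtype=float)[None, :]
    return ((a > b).sum() + 0.5 * (a == b).sum()) / (a.size * b.size)

def weibull(t, alpha, beta, gamma, delta):
    """Cumulative Weibull describing growth of the ROC area over time:
    W(t) = gamma - (gamma - delta) * exp(-(t / alpha) ** beta)."""
    return gamma - (gamma - delta) * np.exp(-(t / alpha) ** beta)

overlap = roc_area([4, 5, 6], [4, 5, 6])        # identical sets -> 0.5
separated = roc_area([12, 14, 13], [2, 1, 3])   # disjoint sets  -> 1.0
```

Computing this area bin by bin and fitting the Weibull to the resulting time series is what lets the analysis assign a single modulation time (the bin where the fitted area crosses 0.7).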


What is all the noise about in interval timing?

Cognitive processes such as decision-making, rate calculation and planning require an accurate estimation of durations in the supra-second range—interval timing. In addition to being accurate, interval timing is scale invariant: the time-estimation errors are proportional to the estimated duration. The origin and mechanisms of this fundamental property are unknown. We discuss the computational properties of a circuit consisting of a large number of (input) neural oscillators projecting on a small number of (output) coincidence detector neurons, which allows time to be coded by the pattern of coincidental activation of its inputs. We showed analytically and checked numerically that time-scale invariance emerges from the neural noise. In particular, we found that errors or noise during storage or retrieval of information regarding the memorized criterion time produce a symmetric, Gaussian-like output whose width increases linearly with the criterion time. In contrast, frequency variability produces an asymmetric, long-tailed Gaussian-like output that also obeys the scalar property. In this architecture, time-scale invariance depends neither on the details of the input population, nor on the probability distribution of the noise.

1. Introduction

The perception and use of durations in the seconds-to-hours range (interval timing) is essential for survival and adaptation, and is critical for fundamental cognitive processes such as decision-making, rate calculation and planning of action [1]. The classic interval timing paradigm is the fixed-interval (FI) procedure in which a subject's behaviour is reinforced for the first response (e.g. lever press) made after a pre-programmed interval has elapsed since the previous reinforcement. Subjects trained on the FI procedure typically start responding after a fixed proportion of the interval has elapsed despite the absence of any external time cues. A widely used discrete-trial variant of the FI procedure is the peak-interval (PI) procedure [2,3]. In the PI procedure, a stimulus such as a tone or light is turned on to signal the beginning of the to-be-timed interval and in a proportion of trials the subject's first response after the criterion time is reinforced. In the remainder of the trials, known as probe trials, no reinforcement is given, and the stimulus remains on for about three times the criterion time. The mean response rate over a very large number of trials has a Gaussian shape whose peak measures the accuracy of criterion time estimation and whose spread measures its precision. In the vast majority of species, protocols and manipulations to date, interval timing is both accurate and time-scale invariant, i.e. time-estimation errors increase linearly with the estimated duration [4–7] (figure 1). Accurate and time-scale invariant interval timing was observed in many species [1,4] from invertebrates to fish, birds and mammals such as rats [8] (figure 1a), mice [11] and humans [9] (figure 1b). Time-scale invariance is stable over behavioural (figure 1b), lesion [12], pharmacological [13,14] (figure 1c) and neurophysiological manipulations [10] (figure 1d).

Figure 1. Accurate and time-scale invariant interval timing. (a) The response rates of rats timing a 30 s (left) or 90 s interval (right) overlap (centre) when the vertical axis is normalized by the maximum response rate and the horizontal axis by the corresponding criterion time (redrawn from [8]). (b) Time-scale invariance in human subjects for 8 and 21 s criteria (redrawn from [9]). (c) Systemic cocaine (COC) administration speeds up timing proportionally (scalar) to the original criteria, 30 and 90 s (redrawn from [8]). (d) The hemodynamic response associated with a subject's active time reproduction scales with the timed criterion, 11 versus 17 s (redrawn from [10]). An important feature of the output function is its asymmetry, which is clearly visible in (c). Although all output functions have a Gaussian-like shape, they also present a long tail. (Online version in colour.)

One of the most influential interval timing paradigms assumes a pacemaker–accumulator clock (pacemaker-counter) and was introduced by Treisman [15]. According to Treisman [15], the interval timing mechanism that links the internal clock to external behaviour also requires some kind of store of reference times and some comparison mechanism for time judgement. The model was rediscovered two decades later and became the scalar expectancy theory (SET) [5,16]. SET also assumes that interval timing emerges from the interaction of three abstract blocks: clock, accumulator (working or short-term memory) and comparator. The clock stage is a Poisson process whose pulses are accumulated in the working memory until the occurrence of an important event, such as reinforcement. At the time of the reinforcement, the number of clock pulses accumulated is transferred from the working (short-term) memory and stored in a reference (or long-term) memory. According to the SET, a response is produced by computing the ratio between the value stored in the reference memory and the current accumulator total. To account for the scalar property of interval timing, i.e. the variability of responses is roughly proportional to the peak time, Gibbon [17] showed that a Poisson distribution for the accumulator requires a time-dependent variance in the 'decision and memory factors as well as in the internal clock. These additional sources will be seen to dominate overall variance in performance' (p. 191), emphasizing the important role of cognitive systems in time judgements. For such reasons, SET was considered more a general theory of animal cognition than strictly a theory of animal timing behaviour [18].
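The SET pipeline described above (Poisson clock, accumulator, reference memory, ratio comparison) can be sketched in a few lines. The pacemaker rate, the ratio threshold b and the time step are illustrative assumptions, not values from the SET literature.

```python
import numpy as np

rng = np.random.default_rng(1)

def set_trial(T_criterion, rate=20.0, b=0.125, dt=0.01, t_max=90.0):
    """Minimal SET sketch: a Poisson clock fills an accumulator; the count
    stored at reinforcement plays the role of reference memory; responding
    starts when the relative discrepancy |ref - acc| / ref drops below b."""
    # Reference memory: pulses accumulated over one reinforced interval
    reference = rng.poisson(rate * T_criterion)
    # Probe trial: accumulate pulses and apply the ratio comparison rule
    acc = 0
    t = 0.0
    while t < t_max:
        acc += rng.poisson(rate * dt)
        if reference > 0 and abs(reference - acc) / reference < b:
            return t  # start-of-responding time
        t += dt
    return t_max

starts = [set_trial(10.0) for _ in range(200)]
mean_start = float(np.mean(starts))
```

Because the rule compares a ratio rather than a difference, the start of responding falls at a fixed proportion of the criterion (here near (1 − b)·T), which is how SET yields proportional timing.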

Another influential interval timing model is the behavioural timing (BeT) theory [19,20]. BeT assumes a ‘clock’ consisting of a fixed sequence of states with the transition from one state to the next driven by a Poisson pacemaker. Each state is associated with different classes of behaviour, and the theory claims these behaviours serve as discriminative stimuli that set the occasion for appropriate operant responses (although there is not a one-to-one correspondence between a state and a class of behaviours). The added assumption that pacemaker rate varies directly with reinforcement rate allows the model to handle some experimental results not covered by SET, although it has failed some specific tests (see [21] for a review).

A handful of neurobiologically inspired models explain accurate timing and time-scale invariance as a property of the information flow in the neural circuits [22,23]. Buonomano & Merzenich [24] implemented a neural network model with randomly connected circuits representing cortical layers 4 and 3 in order to mimic the temporal-discrimination task in the tens to hundreds of milliseconds range. Durstewitz hypothesized that the climbing rate of activity observed experimentally, e.g. from thalamic neurons recordings [25], may be involved in timing tasks [26]. Durstewitz [26] used a single-cell computational model with a calcium-mediated feedback loop that self-organizes into a biophysical configuration which generates climbing activity. Leon & Shadlen [27] suggested that the scalar timing of subsecond intervals may also be addressed at the level of single neurons, though how such a mechanism accounts for timing of supra-second durations is unclear. A solution to this problem was offered by Killeen & Taylor [28] who explained timing in terms of information transfer between noisy counters, although the biological mechanisms were not addressed.

Population clock models of timing are based on the repeatable patterns of activity of a large neural network that allow identification of elapsed time based on a ‘snapshot’ of neural activity [29,30]. In all population clock models, timing is an emergent property of the network in the sense that it relies on the interaction between neurons to produce accurate timing over a time-scale that far exceeds the longest firing period of any individual neuron. The first population clock model was proposed by Mauk and co-workers [7,31,32] in the context of the cerebellum. Such models consist of possible multiple layers of recurrently connected neural networks, i.e. networks of all-to-all coupled neurons that make it possible for a neuron to indirectly feedback onto itself [30]. Depending on the coupling strengths, the recurrent neural networks can self-maintain reproducible dynamic patterns of activity in response to a certain input. Such autonomous and reproducible patterns of neural activity could offer a reliable model for timing. Another advantage of the population clock models is that for weak couplings the network cannot self-maintain reproducible patterns of activity but instead produces input-dependent patterns of activity. Such a model was recently proposed for sensory timing [30]. Similar firing rate models were used by Itskov et al. [33] to design a large recurrently connected neural network that produced precise interval timing. By balancing the contribution of the deterministic and stochastic coupling strengths they showed that the first layer of such a population clock model can produce either a reproducible pattern of activity (associated with a timing ‘task’) or desynchronized pattern of activity that cannot keep track of long time-intervals (‘home cage’) [33]. The rate model of Itskov et al. 
[33] was also capable of extracting accurate interval timing information from a second layer with no recurrent excitation and only a global, non-specific recurrent inhibition. The second layer was driven by both the output of the previous layer (through sparse and random connections) and noise [33].

Finally, a quite different solution was offered by Meck and co-workers [4,34] (figure 2a), who proposed the striatal beat frequency (SBF) model, in which timing is coded by the coincidental activation of neurons, which produces firing beats with periods spanning a much wider range of durations than single neurons [35]. As Matell & Meck [34] suggested, interval timing could be the product of multiple and complementary mechanisms. They suggested that the same neuroanatomical structure could use different mechanisms for interval timing.

Figure 2. The neurobiological structures involved in interval timing and the corresponding simplified SBF architecture. (a) Schematic of some neurobiological structures involved in interval timing. The colour-coded connectivities among different areas emphasize appropriate neuromodulatory pathways. The two main areas involved in interval timing are the frontal cortex and the basal ganglia. (b) In our implementation of the SBF model, the states of the Nin cortical oscillators (input neurons) at reinforcement time T are stored in the reference memory as a set of weights wi. During test trials, the working memory stores the state of FC oscillators vi(t) and, together with the reference memory, projects its content onto Nout spiny (output) neurons of the BG. FC, frontal cortex; MC, motor cortex; BG, basal ganglia; TH, thalamus; GPE, globus pallidus external; GPI, globus pallidus internal; STn, subthalamic nucleus; SNc/r, substantia nigra pars compacta/reticulata; VTA, ventral tegmental area; Glu, glutamate; DA, dopamine; GABA, gamma-aminobutyric acid; ACh, acetylcholine. (Online version in colour.)

Here, we showed analytically that in the context of the proposed SBF neural circuitry, time-scale invariance emerges naturally from variability (noise) in models' parameters. We also showed that time-scale invariance is independent of both the type of the input neuron and the probability distribution or the sources of the noise. We found that the criterion time noise produces a symmetric Gaussian output that obeys scalar property. On the other hand, the frequency noise produces an asymmetric Gaussian-like output with a long tail that also obeys scalar property.

2. The striatal beat frequency model

(a) Neurobiological justification of a striatal beat frequency model

Our paradigm for interval timing is inspired by the SBF model [4,34], which assumes that durations are coded by the coincidental activation of a large number of cortical (input) neurons projecting onto spiny (output) neurons in the striatum that selectively respond to particular reinforced patterns [36–38] (figure 2a).

(i) Neural oscillators

A key assumption of the SBF model is the existence of a set of neural oscillators able to provide the time base for the interval timing network. There is strong experimental evidence that oscillatory activity is a hallmark of neuronal activity in various brain regions, including the olfactory bulb [39–41], thalamus [42,43], hippocampus [44,45] and neocortex [46]. Cortical oscillators in the alpha band (8–12 Hz [47,48]) were previously considered as pacemakers for temporal accumulation [49], as they reset upon occurrence of the to-be-remembered stimuli [50]. In the SBF model, the neural oscillators are loosely associated with the frontal cortex (FC; figure 2a).

(ii) Working and long-term memories

Among the potential areas involved in storing the brain's states related to salient features of stimuli in interval timing trials are the hippocampus (see [51] and references therein) and the striatum, which we mimic in our simplified neural circuitry (figure 2a).

(iii) Coincidence detection with spiny neurons

Support for the involvement of the striato-frontal dopaminergic system in timing comes from imaging studies in humans [52–55], lesion studies in humans and rodents [56,57], and drug studies in rodents [58,59], all pointing towards the basal ganglia (BG) as having a central role in interval timing (see also [60] and references therein). Striatal firing patterns are peak-shaped around a trained criterion time, a pattern consistent with substantial striatal involvement in interval timing processes [61]. Lesions of the striatum result in deficiencies in both temporal-production and temporal-discrimination procedures [62]. There is also neurophysiological evidence that the striatum can engage reinforcement learning to perform pattern comparisons (reviewed by Sutton & Barto [63]). Another reason we ascribed the coincidence detection to medium spiny neurons is their bistable property, which permits selective filtering of incoming information [64,65]. Each striatal spiny neuron integrates a very large number of afferents (between 10 000 and 30 000) [36,37,65], of which a vast majority (≈ 72%) are cortical [47,66].

(iv) Biological noise and network activity

The activity of any biological neural network is inevitably affected by different sources of noise, e.g. channel gating fluctuations [67,68], noisy synaptic transmission [69] and background network activity [70–72]. Single-cell recordings support the hypothesis that irregular firing in cortical interneurons is determined by the intrinsic stochastic properties (channel noise) of individual neurons [73,74]. At the same time, fluctuations in the presynaptic currents that drive cortical spiking neurons have a significant contribution to the large variability of the interspike intervals [75,76]. For example, in spinal neurons, synaptic noise alone fully accounts for output variability [75]. Additional variability affects either the storage (writing) or retrieval (reading) of criterion time to or from the memory [77,78]. Another source of criterion time variability comes from considerations of how animals are trained [79,80]. In this paper, we were not concerned with the biophysical mechanisms that generated irregular firing of cortical oscillators nor did we investigate how reading/writing errors of criterion time happened. We rather investigated whether the above assumed variabilities in the SBF model's parameters can produce accurate and time-scale invariant interval timing.

(b) Numerical implementation of a striatal beat frequency model

(i) Neural oscillators

Neurons that produce stable membrane potential oscillations are mathematically described as limit cycle oscillators, i.e. they possess a closed and stable phase-space trajectory [81]. Because the oscillations repeat identically, it is often convenient to map the high-dimensional space of periodic oscillators using a phase variable that continuously covers the interval (0, 2π). Phase oscillator models have a series of advantages: (i) they provide analytical insights into the response of complex networks; (ii) any neural oscillator can be reduced to a phase oscillator near a bifurcation point [82]; and (iii) they allow numerical checks in a reasonable time. All neurons operate near a bifurcation, i.e. a point past which the neuron produces large membrane potential excursions, called action potentials [81].

In this SBF-sin implementation, the cortical neurons, presumably localized in the FC (figure 2a), are represented by Nin (input) phase oscillators with intrinsic frequencies fi (i = 1, … , Nin) uniformly distributed over the interval (fmin, fmax), projecting onto Nout (output) spiny neurons [34] (figure 2b). A sine wave is the simplest possible phase oscillator that mimics periodic transitions between hyperpolarized and depolarized states observed in single-cell recordings. For analytical purposes, the membrane potential of the ith cortical neuron was approximated by a sine wave vi(t) = a cos(2πfi t), where a is the amplitude of oscillations. We also implemented an SBF-ML network in which the input neurons are conductance-based Morris–Lecar (ML) model neurons with two state variables: membrane potential and a slowly varying potassium conductance [83,84] (see electronic supplementary material, section A for detailed model equations).
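The SBF-sin input stage can be sketched minimally. Unit amplitude (a = 1) and uniformly spaced frequencies in the alpha band are our simplifying assumptions; the ML variant lives in the supplementary material and is not reproduced here.

```python
import numpy as np

N_in = 1000                       # number of cortical (input) oscillators
f_min, f_max = 8.0, 12.0          # alpha band, as in the text
# Uniformly spaced intrinsic frequencies (a deterministic stand-in for a
# uniform distribution over (f_min, f_max))
freqs = np.linspace(f_min, f_max, N_in)

def v(t, f, a=1.0):
    """Membrane potential of a cortical phase oscillator, v_i(t) = a*cos(2*pi*f_i*t)."""
    return a * np.cos(2.0 * np.pi * f * t)

# State of the whole oscillator bank at time t = 2.5 s
states = v(2.5, freqs)
```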

(ii) Working and long-term memories

The memory of the criterion time T is numerically modelled by the set of state parameters (or weights) wij that characterize the state of cortical oscillator i during the FI trial j. In our implementation of the noiseless SBF-sin model, the weights are wij = cos(2πfi Tj), where Tj is the stored value of the criterion time T in the FI trial j. The state of FC oscillator i at the reinforcement time was implemented as the normalized average over all memorized values Tj of the criterion time: wi = (1/norm) Σj wij, where we used norm = Ntrials such that the normalized weight is bounded, |wi| ≤ 1 (figure 2b). We found no difference between the response of the SBF model with the above weights or with the positively defined weight.
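A sketch of the reference memory follows the averaging just described; the normalization by the number of stored trials (so that |wi| ≤ 1) and the Gaussian spread of the stored criterion values are our modelling assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def reference_weights(freqs, T, sigma_T=0.0, n_trials=500):
    """Reference-memory weights: normalized average of the oscillator states
    cos(2*pi*f_i*T_j) over the stored criterion times T_j (one per FI trial).
    sigma_T = 0 recovers the noiseless weights w_i = cos(2*pi*f_i*T)."""
    Tj = T + sigma_T * rng.standard_normal(n_trials)   # stored criterion samples
    # Averaging over trials and dividing by n_trials keeps |w_i| <= 1
    return np.cos(2.0 * np.pi * np.outer(freqs, Tj)).mean(axis=1)

freqs = np.linspace(8.0, 12.0, 1000)
w_clean = reference_weights(freqs, T=10.0)               # noiseless memory
w_noisy = reference_weights(freqs, T=10.0, sigma_T=1.0)  # noisy criterion storage
```

Note how criterion noise washes out the oscillatory structure of the stored weights; this attenuation of high-frequency components is what widens the output peak in §3b.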

(iii) Coincidence detection with spiny neurons

The comparison between a stored representation of an event, e.g. the set of the states of cortical oscillators at the reinforcement (criterion) time wi, and the current state vi(t) of the same cortical oscillators during the ongoing test trial is believed to be distributed over many areas of the brain [85]. Based on neurobiological data, in our implementation of the striato-cortical interval timing network, we have a ratio of 1000 : 1 between the input (cortical) oscillators and output (spiny) neurons in the BG (figure 2b). The output neurons, which mimic the spiny neurons in the BG, act as coincidence detectors: they fire when the current pattern of activity (the state of the cortical oscillators) vi(t) matches the memorized reference weights wi. Numerically, the coincidence detection was modelled using the product of the two sets of weights:

out(t) = Σi wi vi(t), (2.1)

where the sum runs over the Nin input oscillators.

The purpose of the coincidence detection given by equation (2.1) is to implement a rule that produces a strong output when the two vectors wi and vi(t) coincide and weaker responses when they are dissimilar. Although there are many choices, such as sigmoidal functions (which involve numerically expensive calculations owing to the exponential functions involved), we opted for implementing the simplest possible rule that would fulfil the above requirement, i.e. the dot product of the vectors wi and vi(t). Without reducing the generality of our approach, and in agreement with experimental findings [66], for the analytical calculations we considered only one output neuron (Nout = 1) in equation (2.1).
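The dot-product rule of equation (2.1) can be sketched as follows (noiseless weights, one output neuron, unit amplitude; all parameter values are illustrative):

```python
import numpy as np

def output(t, freqs, w):
    """Coincidence detection (equation (2.1) with N_out = 1): the dot product
    between the memorized weights w_i and the current oscillator states v_i(t)."""
    v_t = np.cos(2.0 * np.pi * np.outer(freqs, np.atleast_1d(t)))
    return w @ v_t   # one output (spiny) neuron summing all cortical afferents

freqs = np.linspace(8.0, 12.0, 1000)
T = 10.0
w = np.cos(2.0 * np.pi * freqs * T)        # noiseless reference weights
t_grid = np.linspace(0.0, 30.0, 3001)
out = output(t_grid, freqs, w)

# The raw output is maximal when the current state matches the memory, t = T
t_peak = t_grid[int(np.argmax(out))]
```

The dot product is cheap (no exponentials) yet strongly peaked when the two vectors coincide, which is exactly the selectivity the text asks of the spiny neurons.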

(iv) Biological noise and network activity

Two sources of variability (noise) were considered in this SBF implementation. (i) Frequency variability, which was modelled by allowing the intrinsic frequencies fi to fluctuate according to a specified probability density function pdff, e.g. Gaussian, Poisson, etc. Computationally, the noise in the firing frequency of the respective neurons was introduced by varying either the frequency, fi (in the SBF-sin implementation), or the bias current Ibias required to bring the ML neuron to the excitability threshold (in the SBF-ML implementation). (ii) Memory variability was modelled by allowing the criterion time T to be randomly distributed according to a probability density function pdfT.
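The two noise injections can be sketched as follows; multiplicative Gaussian jitter for the frequencies and additive Gaussian jitter for the criterion are our concrete choices, while the text allows any pdff and pdfT:

```python
import numpy as np

rng = np.random.default_rng(2)

def noisy_frequencies(freqs, sigma_f):
    """(i) Frequency variability: multiplicative Gaussian jitter on the
    intrinsic frequencies during a probe trial (one draw per trial)."""
    return freqs * (1.0 + sigma_f * rng.standard_normal(freqs.shape))

def noisy_criteria(T, sigma_T, n_trials):
    """(ii) Memory variability: the stored criterion time fluctuates from
    trial to trial around the true criterion T."""
    return T + sigma_T * rng.standard_normal(n_trials)

f_probe = noisy_frequencies(np.linspace(8.0, 12.0, 1000), sigma_f=0.1)
T_store = noisy_criteria(10.0, sigma_T=1.0, n_trials=500)
```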

3. Results

(a) No time-scale invariance in a noiseless striatal beat frequency model

In the absence of noise (variability) in the SBF-sin model, the output given by equation (2.1) for Nout = 1 is (see electronic supplementary material, section B for detailed calculations):

out(t) = Σi cos(2πfi T) cos(2πfi t) = (1/2) Σi [cos(2πfi(t − T)) + cos(2πfi(t + T))], (3.1)

where the sum runs over i = 1, … , Nin and the oscillation amplitude was set to a = 1.

The width, σout, of the output function is determined from the condition that the output function amplitude at t = T + σout/2, i.e. out(t = T + σout/2), is half of its maximum possible amplitude, i.e. (1/2)out(t = T). Based on equation (3.1), we predicted theoretically that in the absence of noise σout is independent of the criterion time and violates time-scale invariance (see electronic supplementary material, section B for detailed calculations).

To numerically verify the above predictions, the envelope of the output function of a noise-free SBF-sin model was fitted with a Gaussian whose mean and standard deviation were contrasted against the theoretically predicted values (figure 3a). The width of the envelope is constant regardless of the criterion time and matches the theoretical prediction.
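The constant-width prediction can be checked with a compact simulation. Extracting the envelope through a quadrature (complex) version of the sum in equation (2.1) is our shortcut, not the paper's exact Gaussian-fitting procedure:

```python
import numpy as np

def envelope_width(T, n_in=1000, f_min=8.0, f_max=12.0):
    """Half-maximum width of the output envelope of a noise-free SBF-sin model.
    The complex (quadrature) sum strips the ~10 Hz carrier oscillation so the
    envelope around the peak at t = T can be measured directly."""
    freqs = np.linspace(f_min, f_max, n_in)
    w = np.cos(2.0 * np.pi * freqs * T)          # noiseless reference weights
    t = np.linspace(T - 2.0, T + 2.0, 2001)      # window around the peak
    env = np.abs(np.exp(2j * np.pi * np.outer(t, freqs)) @ w)
    above = t[env >= 0.5 * env.max()]
    return above[-1] - above[0]

width_10 = envelope_width(10.0)
width_30 = envelope_width(30.0)
# As predicted, the width does not grow with the criterion time
```

The width is set by the reciprocal of the frequency band (here 4 Hz), not by T, which is the analytical reason a noiseless SBF model cannot be scale invariant.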

Figure 3. A noise-free SBF model does not produce time-scale invariance. Numerically generated output of a noise-free SBF-sin (a) and SBF-ML (b) model with Nin = 1000 for T = 10 and T = 30 s. As predicted, the width of the output function of any noise-free SBF model is independent of the criterion time. The Gaussian envelopes are also shown with a continuous line (for T = 10 s) and a dashed line (for T = 30 s). (Online version in colour.)

The above result for the noise-free SBF-sin model extends to any type of input neuron. Indeed, according to Fourier theory, any periodic train of action potentials can be decomposed into discrete sine-wave components. It follows that, irrespective of the type of input neuron, a noise-free SBF model cannot produce time-scale invariant outputs. We verified this prediction by replacing the sine-wave oscillator inputs with biophysically realistic noise-free ML neurons (figure 3b). Numerical simulations confirmed that the envelope of the output function of the SBF-ML model can be reasonably fitted by a Gaussian (see [48,86,87]), but the width of the Gaussian output does not increase with the timed interval (figure 3b), thus violating time-scale invariance (the scalar property).

(b) Time-scale invariance emerges from criterion time noise in the striatal beat frequency model

Many sources of noise (variability) may affect the functioning of an interval timing network, such as small fluctuations in the intrinsic frequencies of the inputs, and in encoding and retrieving the weights wi(T) by the output neuron(s) [34,35,86–88]. Here, we showed analytically that one noise source is sufficient to produce time-scale invariance [34,48]. Without compromising generality, in the following, we examined the role of the variability in encoding and retrieval of the criterion time by the output neuron(s). The cumulative effect of all noise sources (trial-to-trial variability, neuromodulatory inputs, etc.) on the memorized weights wi was modelled by the stochastic variable Tj distributed around T according to a given pdfT. For Nout = 1, the output function given by equation (2.1) becomes (see electronic supplementary material, section C for detailed calculations):

out(t) = Σi [(1/Ntrials) Σj cos(2πfi Tj)] cos(2πfi t). (3.2)

(c) Particular case: infinite frequency range and time-scale invariance in the presence of Gaussian noise affecting the memorized criterion time

Although we already showed that the output function for the SBF-sin model and arbitrary pdfT for the criterion time noise is always Gaussian, produces accurate interval timing and obeys the scalar property, it is illuminating to grasp the meaning of the theoretical coefficients in our general result by investigating a biologically relevant particular case. If the criterion time is affected by a Gaussian noise with zero mean and standard deviation σT, then one can show that (see electronic supplementary material, section D for detailed calculations), in the limit of a very large pool (theoretically infinite) of inputs, the output function of the SBF-sin model is

out(t) ∝ exp[−(t − T)²/(2σT²)] + exp[−(t + T)²/(2σT²)]. (3.3)

The output function given by equation (3.3), with the physically realizable term centred at t = T: (i) has a Gaussian shape (as predicted by the central limit theorem), (ii) peaks at t = T, i.e. produces accurate timing, and (iii) has a standard deviation σout = σT, which increases linearly with the criterion time T whenever the stored criterion values spread in proportion to T.
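The Gaussian limit quoted above can be sketched in a few lines; this is a reconstruction from the sine-wave states of §2b, with the unbounded frequency range as the stated idealization:

```latex
% Average of a stored oscillator state over Gaussian criterion noise
% T_j ~ N(T, sigma_T^2):
\left\langle \cos(2\pi f_i T_j) \right\rangle
  = e^{-2\pi^2 f_i^2 \sigma_T^2}\,\cos(2\pi f_i T).
% With infinitely many oscillators the sum over i becomes an integral over f;
% using \int_0^\infty e^{-a f^2}\cos(b f)\,df
%   = \tfrac{1}{2}\sqrt{\pi/a}\; e^{-b^2/(4a)}
% with a = 2\pi^2\sigma_T^2 and b = 2\pi(t \mp T):
\mathrm{out}(t) \propto e^{-(t-T)^2/(2\sigma_T^2)} + e^{-(t+T)^2/(2\sigma_T^2)}.
% Keeping only the physically realizable term centred at t = T gives a Gaussian
% of standard deviation \sigma_{\mathrm{out}} = \sigma_T.
```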

(d) Particular case: finite frequency range and time-scale invariance in the presence of Gaussian noise affecting the memorized criterion time

In our previous numerical implementations of the SBF model [48,86,87], the frequency range was finite and coincided with the alpha band (8–12 Hz). Does the SBF model still perform accurate and scalar interval timing under such a strong restriction? For a finite range of frequencies (fmin < f < fmax) with a very large number of FC oscillators Nin, a more realistic estimation of the output function follows from equation (3.2) (see electronic supplementary material, section E for detailed calculations).

We used the SBF-sin implementation to numerically verify, over multiple trials (runs) of this type of stochastic process and for different values of T, our theoretical prediction that σout = σT. The output functions (see continuous lines in figure 4a) for T = 10 s and T = 30 s are reasonably fitted by Gaussian curves. Our numerical results show a linear relationship between the σout of the Gaussian fit of the output and T. We found that the resultant slope of this linear relationship matched the theoretical prediction. For example, for σT = 10% of the criterion time, the average slope was 11.3% ± 4.5%, with a coefficient of determination R² = 0.93, p < 10⁻⁴. We also found that for the SBF-ML implementation the width of the Gaussian envelope increases linearly with the criterion time (figure 4b). For example, figure 4c shows the slope of the standard deviation σout versus criterion time for different values of the standard deviation of the Gaussian noise. Figure 4c shows not only that the scalar property is valid, but also that σout is proportional to σT, as we predicted theoretically. Indeed, for σT = 0.05 the numerically estimated proportionality constant is 0.068 (filled squares in figure 4c, R² = 0.97); for σT = 0.1 the slope is 0.129 (filled circles in figure 4c, R² = 0.96); and for σT = 0.2 the slope is 0.25 (filled triangles in figure 4c, R² = 0.96).
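The linear growth of the width with the criterion can be reproduced with a compact simulation. Averaging single-trial coincidence envelopes is our way of mimicking the mean response rate over many trials; the quadrature envelope, the network size and the trial count are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)

def mean_envelope_width(T, sigma_frac=0.1, n_in=500, n_trials=200):
    """Width of the trial-averaged SBF-sin output when the stored criterion
    fluctuates as Tj ~ N(T, (sigma_frac*T)^2). Each trial contributes a
    coincidence envelope peaked near its own Tj, so the average spreads in
    proportion to the criterion time."""
    freqs = np.linspace(8.0, 12.0, n_in)
    t = np.linspace(0.0, 3.0 * T, 1500)
    phases = np.exp(2j * np.pi * np.outer(t, freqs))   # reused across trials
    acc = np.zeros(t.size)
    for Tj in T + sigma_frac * T * rng.standard_normal(n_trials):
        w = np.cos(2.0 * np.pi * freqs * Tj)           # weights stored on trial j
        acc += np.abs(phases @ w)                      # single-trial envelope
    above = t[acc >= 0.5 * acc.max()]
    return above[-1] - above[0]

w10 = mean_envelope_width(10.0)
w30 = mean_envelope_width(30.0)
ratio = w30 / w10   # the scalar property predicts a ratio near T2/T1 = 3
```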

Figure 4. Time-scale invariance emerges from criterion time noise in the SBF model. (a) Time-scale invariance emerges spontaneously in a noisy SBF-sin model; here, the two criteria are T = 10 and T = 30 s. The output functions (thin continuous lines) were fitted with Gaussian curves (thick continuous line for T = 10 s and dashed line for T = 30 s) in order to estimate the position of the peak and the width of the output function. In an SBF-sin model, the standard deviation of the output function increases linearly with the criterion time. (b) In an SBF-ML implementation, the output function still has a Gaussian shape (owing to the central limit theorem) and its width increases with the criterion time. (c) Numerical simulations confirm that the standard deviation of the output function σout increases linearly with the criterion time T, which is the hallmark of time-scale invariance. Furthermore, for Gaussian memory variance we also found that σout is proportional to the standard deviation of the noise σT. (Online version in colour.)

(e) Time-scale invariance emerges from frequency variance during probe trials in the striatal beat frequency model

In addition to memory variance, frequency fluctuations owing to stochastic channel noise or background network activity have received considerable attention. Here, we considered only frequency variability during the probe trial and assumed that there was no frequency variability during the FI procedure while the weights wi were memorized. We also assumed that there is no variability in the memorized criterion time, because its effect on interval timing was already addressed in §3d.

The cumulative effect of all noise sources on the firing frequencies during the probe trials was modelled by the stochastic variable fij distributed around the frequency fi according to a given pdff. Based on equation (2.1) with Nout = 1, the output function term centred around t = T becomes (see electronic supplementary material, section F for detailed calculations):

out(t) = (1/Ntrials) Σj Σi cos(2πfi T) cos(2πfij t). (3.7)

Based on the central limit theorem, the output function given by equation (3.7), which is the sum of a (very) large number Ntrials of stochastic variables fij, is always a Gaussian, regardless of the pdff. We used the average value of the stochastic equation (3.7) to estimate the output function and found that (see electronic supplementary material, section F for detailed calculations) it is always: (i) Gaussian (based on the central limit theorem), (ii) peaks at t0f = T/(1 + γf) ≈ T and (iii) has a standard deviation σout that increases linearly with the criterion time T

(f) Particular case: infinite frequency range and time-scale invariance in the presence of Gaussian noise affecting oscillators’ frequencies during probe trials

As in §3c, we used a Gaussian distribution pdff to explicitly compute the theoretical coefficients in the above general result. Briefly, by replacing the stochastic frequencies fij with an appropriate Gaussian distribution fi(1 + Gauss(0, σf)j), we found that the output function is (see electronic supplementary material, section G for detailed calculations):

out(t) ∝ (1/t) exp[−(t − T)²/(2σf²t²)], (3.9)

which looks like a Gaussian with a very long tail (figure 5a) and peaks at t0f ≈ T/(1 + σf²). The skewness of the output function increases with the standard deviation of the frequency noise σf. For t < t0f, the half-width Δτ1 increases with the standard deviation of the frequency noise σf, although at a much slower rate than Δτ2 for t > t0f (figure 5b). This fact is reflected in a faster than linear increase of the ratio Δτ2/Δτ1 against σf (figure 5c). The quadratic fit over the entire range σf ∈ [0, 1] shown in figure 5c is given by Δτ2/Δτ1 = (0.902 ± 0.007) + (3.74 ± 0.03)σf + (−1.27 ± 0.03)σf², with an adjusted R² = 0.999. For a reasonable standard deviation of the frequency noise, σf < 0.5, we found a good linear approximation, with an adjusted R² = 0.999. As the output function given by equation (3.9) is no longer symmetric with respect to the peak located at t0f, the width of the output function is given by σout = x2 − x1, where x1 and x2 are the solutions of the half-width equation out(x) = (1/2)out(1) in the normalized time x = t/t0f. We found (figure 5d) that the width of the output function σout increases faster than linearly with σf, with an adjusted R² = 0.9999 over the entire range σf ∈ [0, 1]. A reasonable approximation for σf < 0.5 is the linear fit σout = (0.019 ± 0.003) + (2.20 ± 0.01)σf, with an adjusted R² = 0.999. As a result, in the presence of frequency variability during probe trials, we predict theoretically that the SBF model (i) produces a Gaussian-like output function with a long tail, (ii) produces accurate interval timing (the output function is centred near t0f ≈ T) and (iii) obeys the scalar property, with the width of the output increasing linearly with the criterion time. We also noted that the peak time predicted for an arbitrary pdff, i.e. t0f = T/(1 + γf), is identical with the peak time in the particular case of Gaussian noise for γf = σf².
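The skewed shape and its half-widths can be checked numerically under the assumption that the long-tailed envelope takes the form (1/t)·exp[−(t − T)²/(2σf²t²)], our reconstruction of the Gaussian-frequency-noise result consistent with the peak location and skewness behaviour described above:

```python
import numpy as np

T, sigma_f = 10.0, 0.3
t = np.linspace(0.01, 60.0, 60000)
# Long-tailed envelope: average of cos(2*pi*f*(1+xi)*t) over xi ~ N(0, sigma_f^2),
# integrated over an unbounded frequency range (term centred near T)
out = (1.0 / t) * np.exp(-((t - T) ** 2) / (2.0 * (sigma_f * t) ** 2))

i_peak = int(np.argmax(out))
t0f = t[i_peak]                      # peak slightly before T, as predicted
half_level = 0.5 * out[i_peak]
above = np.where(out >= half_level)[0]
tau1 = t0f - t[above[0]]             # half-width on the rising side (t < t0f)
tau2 = t[above[-1]] - t0f            # half-width on the long-tailed side (t > t0f)
skew = tau2 / tau1                   # > 1: right-skewed, long tail
```

For σf = 0.3 this gives a skewness ratio near 2, in line with the quadratic fit quoted above, and a peak close to T/(1 + σf²).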

Figure 5. Frequency noise produces a skewed Gaussian-like output with a long tail—theoretical predictions. (a) Theoretically predicted output function for Gaussian noise affecting the oscillators' frequencies is a skewed Gaussian-like output. The normalized output function is plotted against the normalized time t/t0f, where t0f is the time marker for the peak of the output function. The skewness is measured by the two corresponding half-widths τ1 and τ2. (b) Although both half-widths increase with the standard deviation of the frequency noise, the long tail of the output is determined by the very fast increase of τ2. (c) A quantitative measure of the skewness is the ratio τ2/τ1, which increases faster than linearly with σf. (d) The width of the output function σout also increases faster than linearly with σf. (Online version in colour.)

(g) Particular case: finite frequency range and time-scale invariance in the presence of Gaussian noise affecting oscillators’ frequencies during probe trials

For a finite range of frequencies (fmin < f < fmax) with a very large number of FC oscillators Nin, a more realistic estimation of the output function from equation (3.7) is (see electronic supplementary material, section H for detailed calculations):

A significant difference between equation (3.9), which is valid in the limit of an infinite frequency range of the FC oscillators, and equation (3.10), which takes into consideration that there is always only a finite frequency range of the FC oscillators, is the frequency-dependent factor in the output function represented by the difference of the two Erf() functions. The output function in equation (3.10) resembles a Gaussian with a long tail and obeys the time-scale invariance property.

We used both the SBF-sin and SBF-ML implementations to numerically verify that (i) the output function resembles a Gaussian with a long tail (figure 6a), and (ii) the width of the output function increases linearly with the criterion time (figure 6b). The output functions of the SBF-ML implementation (see thin continuous lines in figure 6a) for T = 10 s and T = 30 s are reasonably fitted by Gaussian curves (see thick continuous line for T = 10 s and dashed line for T = 30 s in figure 6a). However, as predicted theoretically, the output has a long tail. The scalar property is indeed valid, because the width of the output function increases linearly with the criterion time (figure 6b).

Figure 6. Frequency noise produces a skewed Gaussian-like output with a long tail in numerical results. (a) The output function (thin continuous lines) of the SBF-ML model for T = 10 and T = 30 s has a Gaussian shape and its peak can be reasonably localized by a Gaussian fit (thick continuous line for T = 10 and dashed line for T = 30 s). The effect of the frequency noise is an asymmetric output function with a long tail. (b) The width of the output function increases linearly with the criterion time and obeys the time-scale invariance property. (Online version in colour.)

Furthermore, we checked that the scalar property holds not only for Gaussian noise, which allowed us to determine an analytic expression for the long-tailed output function in §3f, but also for uniform and Poisson noise.

4. Discussion

Computational models of interval timing vary widely with respect to the hypothesized mechanisms and assumptions by which temporal processing, time-scale invariance and drug effects are explained. The putative mechanisms of timing rely on pacemaker/accumulator processes [5,6,89,90], sequences of behaviours [20], pure sine oscillators [8,34,91,92], memory traces [21,93–97] or cell and network-level models [27,98]. For example, neurometric functions from both single neurons and ensembles of neurons successfully paralleled the psychometric functions for to-be-timed intervals shorter than 1 s [27]. Reutimann et al. [99] also considered interacting populations that are subject to neuronal adaptation and synaptic plasticity, based on the general principle of firing rate modulation in a single cell. The balance of long-term potentiation (LTP) and long-term depression (LTD) mechanisms is thought to modulate the firing rate of neural populations, with the net effect that the adaptation leads to a linear decay of the firing rate over time. Therefore, the linear relationship between time and the number of clock ticks of the pacemaker–accumulator model in SET [5] was translated into a linearly decaying firing rate model that maps time onto a variable firing rate.

By and large, to address time-scale invariance, current behavioural theories assume convenient computations, rules or coding schemes. Scalar timing is explained as deriving either from the computation of ratios of durations [5,6,100], from adaptation of the speed at which perceived time flows [20], or from processes and distributions that conveniently scale up in time [21,91,93,95,96]. Some neurobiological models share computational assumptions with behavioural models and continue to address time-scale invariance by specific computations or embedded linear relationships [101]. Some assume that timing involves neural integrators capable of linearly ramping up their firing rate in time [98], whereas others assume LTP/LTD processes whose balance leads to a linear decay of the firing rate in time [99]. It is unclear whether such models can account for time-scale invariance over a large range of behavioural or neurophysiological manipulations.

Neurons are often viewed as communication channels that respond even to precisely delivered stimulus sequences in a random manner consistent with Gaussian noise [102]. Biological noise was shown to play important functional roles, e.g. to enhance signal detection through stochastic resonance [103,104] and to stabilize synchrony [105,106]. Firing rate variability in neural oscillators also results from ongoing cortical activity (see [106,107] and references therein), which may appear noisy simply because it is not synchronized with obvious stimuli.

A possible common ground for all interval timing models could be the threshold accommodation phenomenon that allows stimulus selectivity [108,109] and promotes coincidence detection [11]. Farries [110] showed that dynamic threshold change in the subthalamic nucleus (STn), which projects to the output nuclei of the BG, allows the STn to act either as an integrator of rate-coded inputs or as a coincidence detector (figure 2). Interestingly, under both conditions, faulty (noisy) processing explains time-scale invariance. For example, Killeen & Taylor [28] explained scale invariance of counting in terms of noisy information transfer between counters. Similarly, here, we explained time-scale invariance of timing in terms of noisy coincidence detection during timing. Therefore, it seems that whether the BG acts as a counter or as a coincidence detector, neural noise alone can explain time-scale invariance.

Our theoretical predictions based on an SBF model show that time-scale invariance emerges as the property of a (very) large and noisy network. Furthermore, we showed that the output function of an SBF model always resembles the Gaussian shape found in behavioural experiments, regardless of the type of noise affecting the timing network. We showed analytically that in the presence of arbitrary criterion variability alone the SBF model produces an output that (i) has a symmetric and Gaussian shape, (ii) is accurate, i.e. the peak of the output is located at t0T = T(1 + γT), where γT is a constant that depends on the type of memory noise and (iii) has a width that increases linearly with the criterion time, i.e. obeys the time-scale invariance property. The memory variability is ascribed to storing or retrieving the representation of the criterion time to and from long-term memory (figure 2b). We also showed analytically and verified numerically that for Gaussian noise affecting the memory of the criterion time the output function of the SBF-sin model is analytic and its peak is at t0T = T, which means that for Gaussian noise γT = 0 (figure 4a). All of the above properties were also verified by replacing phase oscillators with biophysically realistic ML model neurons (figure 4b,c).

We also showed analytically that, in the presence of arbitrary frequency variability alone, the SBF model produces an output that (i) has a Gaussian-like shape (based on the central limit theorem), (ii) is accurate, i.e. the peak of the output is located at t0f = T/(1 + γf), where γf is a constant that depends on the type of frequency noise and (iii) has a width σout = T(1 + γf)σf that increases linearly with the criterion time, i.e. obeys the time-scale invariance property. In the presence of Gaussian noise, the output function is analytic, asymmetric and Gaussian-like (figure 5a), with a skewness that increases quadratically with the standard deviation of the frequency noise (figure 5b). In addition to the fact that the standard deviation of the output function is proportional to the criterion time and, therefore, obeys the time-scale invariance property, it also increases quadratically with the standard deviation of the frequency noise (figure 5d). For Gaussian noise, the asymmetric, long-tailed Gaussian-like output (figure 5a) resembles experimental data that show a strong long tail in subjects' responses (figure 1c).
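The qualitative contrast between the two noise sources can be illustrated with a direct simulation of where the output peak lands on each trial. This is a hedged toy calculation, not the paper's derivation: memory noise is modelled as a multiplicative perturbation of the stored criterion (peak at T(1 + η), symmetric), and frequency noise as a multiplicative perturbation of the clock rate (peak at T/(1 + ε), positively skewed). The noise level is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(2)

def skewness(x):
    """Sample skewness: third standardized central moment."""
    x = np.asarray(x)
    return np.mean((x - x.mean()) ** 3) / x.std() ** 3

T, sigma = 10.0, 0.05
noise = sigma * rng.standard_normal(200_000)   # same draws, paired comparison

# Memory noise perturbs the stored criterion itself: peak at T*(1 + eta).
peaks_memory = T * (1 + noise)
# Frequency noise rescales elapsed time: peak at T/(1 + eps).
peaks_freq = T / (1 + noise)

print(skewness(peaks_memory))                  # ~0: symmetric output
print(skewness(peaks_freq))                    # positive: long right tail
print(peaks_memory.std(), peaks_freq.std())    # both spreads proportional to T
```

Both spreads scale linearly with T (scalar property), but only the reciprocal (frequency-noise) case develops the long right tail described above.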

Our results regarding the effect of noise on interval timing support and extend the speculation [34] that an SBF model requires at least one source of variance (noise) to address time-scale invariance. Rather than being a signature of higher-order cognitive processes or specific neural computations related to timing, time-scale invariance naturally emerges in a massively connected brain from the intrinsic noise of neurons and circuits [4,27]. This provides the simplest explanation for the ubiquity of scale invariance of interval timing in a large range of behavioural, lesion and pharmacological manipulations.


Neurometric amplitude-modulation detection threshold in the guinea-pig ventral cochlear nucleus

Amplitude modulation (AM) is a key information-carrying feature of natural sounds. The majority of physiological data on AM representation are in response to 100%-modulated signals, whereas psychoacoustic studies usually operate around detection threshold (∼5% AM). Natural sounds are characterised by low modulation depths (<<100% AM).

Recording from ventral cochlear nucleus neurons, we examine the temporal representation of AM tones as a function of modulation depth. At this locus there are several physiologically distinct neuron types which either preserve or transform temporal information present in their auditory nerve fibre inputs.

Modulation transfer function bandwidth increases with increasing modulation depth.

Best modulation frequency is independent of modulation depth.

Neural AM detection threshold varies with unit type, modulation frequency, and sound level. Chopper units have better AM detection thresholds than primary-like units. The most sensitive chopper units have thresholds around 3% AM, similar to human psychophysical performance.

Abstract

Amplitude modulation (AM) is a pervasive feature of natural sounds. Neural detection and processing of modulation cues is behaviourally important across species. Although most ecologically relevant sounds are not fully modulated, physiological studies have usually concentrated on fully modulated (100% modulation depth) signals. Psychoacoustic experiments mainly operate at low modulation depths, around detection threshold (∼5% AM). We presented sinusoidal amplitude-modulated tones, systematically varying modulation depth between zero and 100%, at a range of modulation frequencies, to anaesthetised guinea-pigs while recording spikes from neurons in the ventral cochlear nucleus (VCN). The cochlear nucleus is the site of the first synapse in the central auditory system. At this locus significant signal processing occurs with respect to the representation of AM signals. Spike trains were analysed in terms of the vector strength of spike synchrony to the amplitude envelope. Neurons showed either low-pass or band-pass temporal modulation transfer functions, with the proportion of band-pass responses increasing with increasing sound level. The proportion of units showing a band-pass response varies with unit type: sustained chopper (CS) > transient chopper (CT) > primary-like (PL). Spike synchrony increased with increasing modulation depth. At the lowest modulation depth (6%), significant spike synchrony was only observed near the unit's best modulation frequency for all unit types tested. Modulation tuning therefore became sharper with decreasing modulation depth. AM detection threshold was calculated for each individual unit as a function of modulation frequency. Chopper units have significantly better AM detection thresholds than do primary-like units. AM detection threshold is significantly worse at 40 dB vs. 10 dB above pure-tone spike rate threshold.
Mean modulation detection thresholds for sounds 10 dB above pure-tone spike rate threshold at best modulation frequency are (95% CI) 11.6% (10.0–13.1) for PL units, 9.8% (8.2–11.5) for CT units, and 10.8% (8.4–13.2) for CS units. The most sensitive guinea-pig VCN single unit AM detection thresholds are similar to human psychophysical performance (∼3% AM), while the mean neurometric thresholds approach whole animal behavioural performance (∼10% AM).
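The vector-strength analysis named in the abstract is the standard Goldberg–Brown measure. Below is a minimal sketch on synthetic spike trains; the jitter level, spike counts and the Rayleigh significance cut-off are illustrative assumptions, not values from this study.

```python
import numpy as np

def vector_strength(spike_times, mod_freq):
    """Goldberg-Brown vector strength: each spike is a unit vector at its
    phase within the modulation cycle; VS is the length of the mean vector
    (1 = perfect phase locking, ~0 = no locking to the envelope)."""
    phases = 2 * np.pi * mod_freq * np.asarray(spike_times)
    return np.abs(np.mean(np.exp(1j * phases)))

def rayleigh_statistic(spike_times, mod_freq):
    """2nR^2; under the null of no phase locking, values above ~13.8 are
    conventionally taken as significant (p < 0.001)."""
    n = len(spike_times)
    return 2 * n * vector_strength(spike_times, mod_freq) ** 2

rng = np.random.default_rng(0)
fm = 100.0                                          # modulation frequency (Hz)
# Phase-locked train: one jittered spike per modulation cycle
locked = (np.arange(200) + 0.05 * rng.standard_normal(200)) / fm
# Unlocked train: spikes at random times over the same duration
unlocked = rng.uniform(0.0, 200 / fm, 200)

print(vector_strength(locked, fm))                  # high, approaching 1
print(vector_strength(unlocked, fm))                # near 0
```

A unit's neurometric AM detection threshold can then be read off as the lowest modulation depth at which the phase-locking criterion is met, in the spirit of the per-unit threshold calculation the abstract describes.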

