Towards an Atlas of Human Neocortical Oscillations

•December 24, 2014 • 2 Comments

The electroencephalogram (EEG) is one of the most popular measures of human brain function and is currently the most clinically utilized. The EEG primarily reflects the synaptic activity of patches of neocortical pyramidal cells (at minimum, probably on the order of 25 mm²; Baillet, Mosher, & Leahy, 2001), and its most salient feature is intermittent oscillations at various frequencies (Fig. 1), which vary according to brain state (e.g., awake vs. slow wave sleep) and cortical area/scalp location. The most conventional frequency bands are: delta [1–4 Hz], theta [4–8 Hz], alpha/mu [8–13 Hz], beta [13–30 Hz], gamma [30–80 Hz], and high gamma [80–150 Hz]. However, the boundaries between bands are not well defined and can vary somewhat depending on whom you ask.

Fig. 1: Single EEG channels illustrating different types of oscillations
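To make the band definitions concrete, here is a minimal sketch of estimating per-band power for one channel with SciPy; the sampling rate and the synthetic stand-in signal are assumptions for illustration, not anything from the work described here:

```python
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

# Conventional band boundaries in Hz (boundaries vary across labs)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 80), "high gamma": (80, 150)}

fs = 512                          # assumed sampling rate in Hz
eeg = np.random.randn(fs * 60)    # stand-in for 60 s of one EEG channel

# Welch's method gives a smoothed estimate of the power spectral density
freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)

for name, (lo, hi) in BANDS.items():
    mask = (freqs >= lo) & (freqs < hi)
    power = trapezoid(psd[mask], freqs[mask])  # integrate PSD over the band
    print(f"{name:>10s}: {power:.4f}")         # arbitrary units for this synthetic signal
```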

Since these oscillations are often larger when an individual is relatively inactive (e.g., eyes closed), some have interpreted them as the product of idling neurons with no functional importance. However, there is increasing evidence that these oscillations serve important roles, such as mediating communication between brain areas and encoding neural representations. Moreover, these rhythms are useful for identifying macroscale brain networks and are clinically useful for detecting brain abnormalities (e.g., tumors), which can disrupt them.


Fig. 2: [Left] Eyes-open resting state EEG spectral power density (SPD) averaged across data from 7 neurotypical individuals (unpublished data). Each line is the SPD at a different electrode. [Right] The topography of the SPD at 10.5 Hz.


Fig. 3: Same as Fig. 2, except that the topography shows the SPD at 5.4 Hz.

Decades of research have thoroughly characterized typical oscillatory activity at the scalp. The EEG power spectrum follows a roughly 1/f distribution with a peak in the alpha band around 10 Hz that is largest over the back of the head (Fig. 2; Fig. 4, lower left) but apparent at all standard EEG electrode scalp locations. Note that alpha band oscillations over sensory-motor regions, called "mu" rhythms, are present but are dwarfed by the much stronger posterior alpha oscillations. A second, much smaller peak in the theta range is also observed at frontal-midline electrodes (Fig. 3; Fig. 4, upper right). Clear peaks are generally not observed in the other frequency ranges. Delta power is broadly distributed but greatest medially and centrally (Fig. 4, upper left), and beta power is even more uniformly distributed (Fig. 4, lower right).


Fig. 4: Topographies of resting EEG spectral amplitude density averaged across 43 individuals in delta [upper left], theta [upper right], alpha [lower left], and beta [lower right] frequency bands (Maurer & Dierks, 1991). It was actually surprisingly hard for me to find a published figure illustrating this.

For medical reasons, EEG is also sometimes recorded intracranially with electrodes below the dura mater or with penetrating "depth" electrodes. Although intracranial EEG (iEEG) oscillations have been studied since the early 20th century (e.g., Penfield & Jasper, 1954), there has been little quantitative research characterizing the types and distribution of iEEG oscillations. This is most likely due to the difficulty of precisely identifying the location of iEEG electrodes and mapping those locations to a standard brain, a step that enables combining data across individuals. Contemporary neuroimaging has solved these problems, and our research group recently attempted to develop a quantitative atlas of iEEG oscillations using data obtained from patients undergoing evaluation for epilepsy surgery (Groppe et al., 2013). We found some surprising differences from what is observed in scalp EEG.


Fig. 5: Histogram of SPD peaks of resting state intracranial electrode data, collapsed across all electrodes and participants. Note that the frequency axis is scaled logarithmically and that the high proportion of peaks above 40 Hz is due to the flattening of the SPD distribution there, which makes it difficult to distinguish true peaks from noise.


Fig. 6: Mean SPDs of seven clusters.


Fig. 7: Proportion of electrodes in each cortical area (as defined by the Desikan-Killiany atlas) belonging to each cluster. Electrodes from both hemispheres have been combined to increase the number of electrodes per area. Only areas covered by more than one electrode (out of all 1208) are colored. All other areas are shown in gray.

The first difference is that alpha oscillations are not the most dominant type of rhythm. If you simply look at which frequencies exhibit peaks in each electrode's power spectrum, you find that theta peaks are the most common, with a mode at 7 Hz (Fig. 5). Less frequent modes are also apparent at 3, 9, and 15 Hz. To get a sense of the cortical topography of these oscillations, we first performed k-means cluster analysis, which identified seven types of spectral power densities (Fig. 6), and then computed the proportion of electrodes in each of 35 cortical areas that belonged to each cluster (Fig. 7). The 3 Hz delta cluster is mostly frontal and temporal, including the temporal-parietal junction. The three theta clusters have somewhat complementary distributions: 5 Hz is mostly frontal and basal temporal, the strongly peaked 7 Hz cluster is mostly temporal and lateral parietal, and the weakly peaked 7 Hz cluster is occipital and medial parietal. The alpha 10 Hz cluster is largely limited to parietal and occipital areas. The beta cluster, as expected, is strongest over the peri-central gyri. Somewhat surprisingly, it also extends frontally across the middle frontal gyrus and pars opercularis. The last cluster, which lacks strong spectral peaks, is found primarily over basal temporal and medial areas. Finally, it is worth noting that although no cluster emerged with peaks in the high gamma range (80–150 Hz), a handful of channels in a few patients did exhibit high gamma peaks. There has been some debate as to whether high gamma activity reflects true oscillations or simply a general elevation of the 1/f distribution (Crone, Korzeniewska, & Franaszczuk, 2011). These few channels suggest that some brain areas can produce oscillatory activity in this frequency band, though such activity is uncommon.
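For readers curious how such a clustering might look in code, here is a minimal sketch (my own reconstruction, not the actual pipeline from Groppe et al., 2013); the sampling rate, array shapes, spectrum normalization, and use of scikit-learn are all assumptions:

```python
import numpy as np
from scipy.signal import welch
from sklearn.cluster import KMeans

fs = 512                                   # assumed sampling rate (Hz)
n_elec = 100
data = np.random.randn(n_elec, fs * 120)   # stand-in for (electrodes x samples)

# One spectral power density per electrode
freqs, spd = welch(data, fs=fs, nperseg=2 * fs, axis=-1)
keep = (freqs >= 1) & (freqs <= 150)       # restrict to the bands of interest

# Log-transform and z-score each spectrum so clustering reflects spectral
# shape (where the peaks are) rather than overall amplitude
log_spd = np.log10(spd[:, keep])
log_spd = (log_spd - log_spd.mean(axis=1, keepdims=True)) \
          / log_spd.std(axis=1, keepdims=True)

km = KMeans(n_clusters=7, n_init=20, random_state=0).fit(log_spd)
print(np.bincount(km.labels_))             # number of electrodes per cluster
```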


Fig. 8: Locations of all 1208 electrodes on the average cortical surface. Electrodes are color coded to indicate cluster membership (see Fig. 6).

In addition to the average distribution of these oscillations, it is important to consider how much variation there is within cortical areas and across subjects. Figure 8 shows the locations of electrodes collapsed across all patients and color coded according to power spectrum type. Although some spectrum types clearly cluster together in areas, there is considerable variation even in the most homogeneous regions (e.g., the pre- and post-central gyri). This is mostly due to individual variation in oscillatory activity. Some of that variation is surely due to factors such as drowsiness, medications, and age that are difficult to control for when acquiring intracranial EEG data. However, I suspect that most of it reflects individual differences in functional architecture, which we know from non-invasive imaging of neurotypical individuals is substantial (e.g., Mueller et al., 2013). If I'm correct, understanding this individual variability will be key to understanding both the underlying processes that generate these oscillations and their importance for brain function.

-David Groppe

Baillet, S., Mosher, J., & Leahy, R. (2001). Electromagnetic brain mapping. IEEE Signal Processing Magazine, 18(6), 14–30. DOI: 10.1109/79.962275

Crone, N. E., Korzeniewska, A., & Franaszczuk, P. J. (2011). Cortical γ responses: searching high and low. International Journal of Psychophysiology, 79(1), 9–15. PMID: 21081143

Groppe, D. M., Bickel, S., Keller, C. J., Jain, S. K., Hwang, S. T., Harden, C., & Mehta, A. D. (2013). Dominant frequencies of resting human brain activity as measured by the electrocorticogram. NeuroImage, 79, 223–233. PMID: 23639261

Maurer, K., & Dierks, T. (1991). Atlas of Brain Mapping. Springer.

Mueller, S., Wang, D., Fox, M., Yeo, B., Sepulcre, J., Sabuncu, M., Shafee, R., Lu, J., & Liu, H. (2013). Individual variability in functional connectivity architecture of the human brain. Neuron, 77(3), 586–595. DOI: 10.1016/j.neuron.2012.12.028

Penfield, W., & Jasper, H. (1954). Epilepsy and the Functional Anatomy of the Human Brain. Boston: Little, Brown.

Prototypical EEG Artifact Independent Components

•March 29, 2014 • Leave a Comment

For better or worse, I wrote a traditional dissertation instead of a more practical "staple" dissertation composed primarily of journal articles. Hence, there are some potentially useful things in it that will likely never be read by anyone beyond my committee and, perhaps, people willing to search the thing piecemeal on Google Books. Perhaps this blog can improve the situation a bit.

Anyway, for my thesis I had the good fortune of working with Scott Makeig, who pioneered the use of independent component analysis (ICA) for EEG analysis (Makeig, Bell, Jung, & Sejnowski, 1996). ICA is the best general-purpose tool out there for correcting EEG artifacts such as blinks and muscle activity. Artifact correction in general isn't perfect (Groppe, Makeig, & Kutas, 2008), but in practice ICA artifact correction works quite well (Mognon, Jovicich, Bruzzone, & Buiatti, 2011).

One issue facing ICA newbies, though, is learning to recognize which independent components (ICs) represent artifacts and which reflect true neural activity. In case it helps, here are a few prototypical examples of artifact ICs. Each figure shows the IC's scalp topography, activity ERPimage, and power spectrum. If you load a dataset into EEGLAB, you can make a plot like this for each IC via Plot->Component properties in the EEGLAB GUI menu (see below).

How to make plots of IC properties like those below using the EEGLAB GUI.
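For readers who work in Python rather than MATLAB, MNE-Python offers a rough analogue of this component-properties view. A minimal sketch, assuming you have a raw recording in a file MNE can read (the filename here is made up) and that 20 components is reasonable for your montage:

```python
import mne
from mne.preprocessing import ICA

# Hypothetical filename; any MNE-readable raw EEG recording will do
raw = mne.io.read_raw_fif("my_eeg-raw.fif", preload=True)
raw.filter(l_freq=1.0, h_freq=None)   # high-pass filtering improves ICA fits

ica = ICA(n_components=20, random_state=97)
ica.fit(raw)

# Topography, ERP-image-like plot, and power spectrum for the first two ICs,
# analogous to EEGLAB's Plot->Component properties
ica.plot_properties(raw, picks=[0, 1])
```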

Blink IC

An IC corresponding to blink activity. The topography and the large monophasic humps in the ERPimage are the most characteristic features of this artifact. These ICs usually have low numbers in EEGLAB (like IC 1) because their magnitude at the scalp is so large (EEGLAB sorts ICs from largest to smallest scalp magnitude, more or less).

Horizontal Eye Movement IC

This IC corresponds to horizontal eye movements. Note that the polarity of the IC flips across the eyes and the ERPimage typically has square wave-like jumps. These ICs typically have low numbers as well.


Muscle Activity IC

An IC corresponding to muscle activity (EMG). These ICs typically have a lot of power in the 20–50 Hz range and focal topographies that project to only a few neighboring electrodes. Sometimes they flip polarity across neighboring electrodes.

Heartbeat Mastoid Artifact IC

If a mastoid reference electrode is near an artery, the artery will move the electrode, producing brief, monophasic blips that occur about once a second. This IC captures that artifact, as is apparent from the blips in the ERPimage and the topography with a strong right-mastoid weight.
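Much of this detective work can also be semi-automated. As a sketch in MNE-Python (my own suggestion, not the workflow described above), a recording that includes EOG and ECG channels lets you flag ocular and cardiac ICs by correlating each component's time course with those channels:

```python
import mne
from mne.preprocessing import ICA

raw = mne.io.read_raw_fif("my_eeg-raw.fif", preload=True)  # hypothetical file
raw.filter(l_freq=1.0, h_freq=None)
ica = ICA(n_components=20, random_state=97)
ica.fit(raw)

# Flag ICs whose time courses correlate with the EOG/ECG channels
eog_inds, _ = ica.find_bads_eog(raw)
ecg_inds, _ = ica.find_bads_ecg(raw)
ica.exclude = sorted(set(eog_inds + ecg_inds))

# Reconstruct the EEG with the flagged artifact ICs removed
raw_clean = ica.apply(raw.copy())
```

Automated flags are a starting point; it is still worth inspecting each flagged component's properties before removing it.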

-David Groppe


Makeig, S., Bell, A. J., Jung, T.-P., & Sejnowski, T. J. (1996). Independent component analysis of electroencephalographic data. Advances in Neural Information Processing Systems, 8, 145–151.

Groppe, D. M., Makeig, S., & Kutas, M. (2008). Independent component analysis of event-related potentials. Cognitive Science Online, 6(1), 1–44.

Mognon, A., Jovicich, J., Bruzzone, L., & Buiatti, M. (2011). ADJUST: An automatic EEG artifact detector based on the joint use of spatial and temporal features. Psychophysiology. PMID: 20636297

The Latest in EEG Art

•March 25, 2014 • Leave a Comment

From time to time brainwaves make waves (or at least ripples) in the art world. I stumbled across a couple of examples last week. The first is Lucas Maassen's "Brain Wave Sofa," a sofa whose shape was produced from Maassen's EEG while he thought the word "comfort." The work "is a tongue-in-cheek reference to a futuristic production workflow in which the designer only has to close his eyes and a computer 'prints' the result out as a functional form." It is currently on display at New York's Museum of Arts and Design. The text at the museum describing the couch is a bit misleading in that it suggests that a very different couch would have been produced had Maassen thought of a different word. In reality, he would have gotten a similar-looking couch no matter what he was thinking. However, had the designer blinked once or twice, he could have added a backrest.


Lucas Maassen’s “Brain Wave Sofa” was produced from three minutes of his EEG and a CNC milling machine. It has lots of dimples that would be popular with cats.

The other piece of EEG creativity is the "brain stethoscope," an invention of Stanford's Chris Chafe and Josef Parvizi. Chafe has made a name for himself turning all sorts of data (e.g., CO2 levels near ripening tomatoes) into music. Parvizi, a neurologist and neuroscientist, teamed up with Chafe to musicalize EEG. If you simply turn EEG into an audio file, it doesn't sound like much because most of the signal is at frequencies below what our auditory system is sensitive to. If you speed up raw EEG to make it more audible, it sounds mostly like wind. To make the structure of EEG more apparent to the ear, Chafe extracted features from the EEG and fed them through a music synthesizer. The result sounds like an insect annoying a cyborg (click on the video below to hear it). While this might not set your toes a-tapping, the inventors hope it could be of clinical use, helping physicians unskilled in reading EEG to better identify abnormalities such as seizures.
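The "speed it up" trick is easy to try yourself. Here is a minimal sketch, with synthetic noise standing in for a real recording and both sampling rates assumed:

```python
import numpy as np
from scipy.io import wavfile

fs_eeg = 250                           # assumed EEG sampling rate (Hz)
eeg = np.random.randn(fs_eeg * 60)     # stand-in for 60 s of one channel

# Writing the EEG samples out at an audio rate plays them back ~176x faster,
# shifting a 10 Hz alpha rhythm up to ~1.8 kHz, well within hearing range
audio = (eeg / np.max(np.abs(eeg))).astype(np.float32)  # normalize to [-1, 1]
wavfile.write("eeg_wind.wav", 44100, audio)
```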

-David Groppe

Party science: You can’t judge a wine by its bottle/price tag

•February 26, 2014 • 2 Comments


When we were planning our wedding a few years ago, a guidebook recommended that we provide modestly priced wine, citing blind tasting experiments that found no correlation between people's enjoyment of wine and its price (at least among people with no training in wine). This matches my subjective experience, but, ever the scientist and looking for an excuse to have people over, I organized a blind tasting with some friends to see if we could replicate the results. I purchased four bottles of red wine ranging in price from $3 to $45 (Figure 1). The most expensive and cheapest bottles were cabernet sauvignons, with a zinfandel and a pinot noir in the middle to provide some variety.


Fig 1: Our four contenders: Trader Joe's Charles Shaw Blend (Cabernet Sauvignon, $3), Trader Joe's Grower's Reserve (Zinfandel, $6), Layer Cake (Pinot Noir, $14), Stag's Leap Wine Cellars Artemis (Cabernet Sauvignon, $45).

The most expensive was from Stag’s Leap, a Napa Valley winery that is famous for beating French wines in a highly publicized “Judgment of Paris” tasting in 1976. The victory established California wines as world class and earned the winning bottle of wine a place at the Smithsonian National Museum of American History. The cheapest was Trader Joe’s Charles Shaw Blend, also known as “Two-Buck Chuck,” which is famous for being extremely cheap. All four bottles were purchased at the one Trader Joe’s in Manhattan that is allowed to sell wine.

My six volunteer tasters were asked to rate each wine's quality on a scale of 1 to 10, with 10 being best. They were also asked to match each wine to its grape type (Cab, Pinot, or Zin) and to its description on the bottle. The descriptions were as follows:

  • Stag’s Leap: “…a wine with lush fruit flavors balanced by structure and elegance.”
  • Layer Cake: “…like a great layer cake, fruit, mocha and chocolate, hints of spice and rich, always rich.”
  • Trader Joe’s Grower’s Reserve: “…mouth-filling flavors of ripe berry and plum. Full-bodied with peppery overtones, this excellent red wine compliments full flavored spicy dishes.”
  • Charles Shaw: “I cost $2.99.” (I actually had to make this up as the label didn’t have a description on it.)


Figure 2 shows each taster's rating of the four wines. Indeed, there is no clear correlation between cost and subjective quality. One way to quantify this is to measure the rank correlation between each individual's ratings and the wine costs and then to compare the mean correlation against a null hypothesis of 0. Using the rank correlation metric Kendall's tau, there is actually a tendency toward a positive correlation (mean tau=0.23, t(5)=1.64, p=0.16, d=0.67, 95% CI: -0.13 to 0.58). A possible confound in this analysis is that the different grape types may be affecting subjective preference. To control for this, I compared just the two cabernets (Figure 3). Again, there is a tendency for the more expensive wine to be rated better, but the difference is small and far from reliable (t(5)=0.34, p=0.75, d=0.14).
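For anyone who wants to run the same numbers on their own party data, here is a sketch of the analysis; the ratings matrix below is made up for illustration (the real ratings are plotted in Figure 2):

```python
import numpy as np
from scipy.stats import kendalltau, ttest_1samp

prices = [3, 6, 14, 45]
# Hypothetical ratings: one row per taster, one column per wine
ratings = np.array([[5, 6, 4, 7],
                    [6, 5, 7, 6],
                    [4, 7, 5, 5],
                    [7, 4, 6, 8],
                    [5, 5, 6, 4],
                    [6, 6, 5, 7]])

# Rank correlation between price and rating for each taster...
taus = [kendalltau(prices, r)[0] for r in ratings]
# ...then test whether the mean correlation differs from zero
t, p = ttest_1samp(taus, popmean=0)
print(f"mean tau = {np.mean(taus):.2f}, t({len(taus) - 1}) = {t:.2f}, p = {p:.2f}")
```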


Fig. 2: Each of the six tasters' ratings of wine quality as a function of wine price. Individual ratings are slightly jittered along the cost axis to minimize overlap. Red bars indicate mean rating. Pink bars indicate 95% confidence intervals (t-distribution assumed). Purple bars indicate +/- one standard deviation.


Fig 3: [Left] Quality ratings of the two wines made from the same type of grape. Legend same as Figure 2. [Right] The difference between each taster’s rating of the more and less expensive cabernet.


Figure 4 shows how well each taster could identify the grape used to make each wine. Although all but one taster scored better than the median accuracy expected by chance, a two-tailed sign test shows that this is not clearly reliable (p=0.22; see the sketch just below). As you might expect, matching the flowery label descriptions to the wines appears to be hopeless (Figure 5). Here only one taster did better than chance. The labels might as well have been randomly assigned (two-tailed sign test, p=1.00).
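For the curious, this kind of sign test amounts to a two-sided binomial test on the count of tasters above chance. Assuming SciPy, the count of 5 of 6 reproduces the p-value quoted above:

```python
from scipy.stats import binomtest

# 5 of 6 tasters exceeded the grape-matching accuracy expected by chance
result = binomtest(5, n=6, p=0.5, alternative="two-sided")
print(result.pvalue)  # 0.21875, i.e., the p = 0.22 reported above
```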


Fig 4: Each taster’s proportion of correct matches between the four wines and the three possible grape types. Taster key is the same as in Figure 2.


Fig 5:Each taster’s proportion of correct matches between each wine and the description of the wine on its label. Taster key is same as in Figure 2.


In general, the subjective preferences of our blind tasters (whom I take to be representative of neuroscientists and neuropsychologists, if not the general population) did not clearly correlate with the wines' price tags. Moreover, our tasters could not clearly discriminate between the different types of wine, and their matches to the wines' descriptions were no better than random. Granted, this is a very small sample of tasters, and my analyses don't attempt to make inferences beyond the four specific wines we tasted. Thus, we can't conclude from this that there is no correspondence between wine price/type and taste. However, I was struck by the lack of subjective difference between the quality of the Charles Shaw and the Stag's Leap given that their prices spanned an order of magnitude. Is a mere tendency toward such a subjective difference enough to justify spending 15 times as much on a bottle of wine at your next dinner party?

After seeing our results, my answer is "no," and this is backed up by a much larger and more careful 2008 study by Goldstein and colleagues, who found:

Individuals who are unaware of the price do not derive more enjoyment from more expensive wine. In a sample of more than 6,000 blind tastings, we find that the correlation between price and overall rating is small and negative, suggesting that individuals on average enjoy more expensive wines slightly less. For individuals with wine training, however, we find indications of a positive relationship between price and enjoyment. Our results are robust to the inclusion of individual fixed effects, and are not driven by outliers: when omitting the top and bottom deciles of the price distribution, our qualitative results are strengthened, and the statistical significance is improved further. Our results indicate that both the prices of wines and wine recommendations by experts may be poor guides for non-expert wine consumers.

So find wines you like without emptying your wallet. And if you have guests who are less rational and turn up their noses at a bottle of Two-Buck Chuck because of the price, serve them using a decanter (or beaker). 🙂
-David M. Groppe

P.S. If you’re looking for evidenced based tips on wine, Consumer Reports provides some recommendations. You have to subscribe or go through a library to read them though. However, some general tips (e.g., wine myths vs. reality and food pairings) are available for free. For what it’s worth, Steven Levitt of Freakonomics fame recommends the book The Wine Trials as a buying guide.

Goldstein, R., Almenberg, J., Dreber, A., Emerson, J. W., Herschkowitsch, A., & Katz, J. (2008). Do more expensive wines taste better? Evidence from a large sample of blind tastings. Journal of Wine Economics, 3(1), 1–9. DOI: 10.1017/S1931436100000523

Four things you might not know (but should) about false discovery rate control

•September 7, 2013 • 6 Comments

Massive increases in the amount of data scientists are able to acquire and analyze over the past two decades have driven the development of new statistical tools that can better deal with the challenges of "big data." One such tool is control of the "false discovery rate" (FDR) across a set of statistical tests. The FDR is simply the mean proportion of statistically significant test results that are really false positives. As you may recall from your introductory statistics course, when you perform multiple statistical tests, the probability of a false positive result rapidly increases. For example, if one were to perform a single test using an alpha level of 5% and there truly is no effect of the factor being tested, there is only a 5% chance of a false positive result. However, if one were to perform 10 such tests, there is a 40% chance of one or more false positive results (again assuming that the factor being analyzed has no effect). FDR control, like Bonferroni correction, reduces the probability of false positive results by using a more conservative alpha level for each test. The advantage of FDR control over Bonferroni correction is that FDR control is generally more powerful (i.e., better at detecting true effects). This stems from the fact that Bonferroni correction is effectively designed to prevent any false positives, whereas FDR control is designed only to prevent a large proportion of false positives and can thus afford to be less conservative. In other words, FDR control simply attempts to ensure that the great majority of statistically significant results are accurate, and it generally lets in a small proportion of false positives in the process.
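The 40% figure follows from the complement rule: with N independent tests, the probability of at least one false positive is 1 - (1 - alpha)^N. A two-line check:

```python
alpha, n_tests = 0.05, 10

# Chance of at least one false positive across independent tests,
# assuming no true effects exist
p_any_false = 1 - (1 - alpha) ** n_tests
print(round(p_any_false, 3))  # 0.401, i.e., the ~40% quoted above
```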

Because of FDR control's relatively good statistical power and its ease of application, it has rapidly become commonplace in neuroscience. However, from talking to other researchers, I fear that many people who use FDR control do not understand the following simple but important facts:

 1. There are multiple FDR control algorithms: Although some papers refer to FDR control as if there were only one procedure for controlling FDR, there are actually several different algorithms with different strengths and weaknesses (for reviews see Farcomeni, 2007; Groppe, Urbach, & Kutas, 2011a; Romano & Shaikh, 2006). The most popular is the algorithm derived by Benjamini and Hochberg (1995), which is relatively easy to implement and statistically powerful. When you use FDR control, be sure to cite which algorithm you're using.

2. The most popular FDR algorithm is not generally guaranteed to work: Benjamini and Hochberg's FDR control algorithm (BH-FDR) is only guaranteed to control the false discovery rate when the statistical tests it is applied to are independent or exhibit "positive regression dependency." When the data being analyzed are normally distributed, positive regression dependency means that none of the variables tested are negatively correlated. Note that although BH-FDR is not generally guaranteed to work, Clarke and Hall (2009) have shown that for normally distributed data BH-FDR will accurately control FDR as the number of tests it is applied to grows. Indeed, several studies using simulated data that violate BH-FDR's assumptions have shown that in practice it still works quite well (e.g., Groppe, Urbach, & Kutas, 2011b). That being said, there is at least one FDR control algorithm that is always guaranteed to work. It was derived by Benjamini and Yekutieli (2001) but is not commonly used because it is much more conservative than the BH-FDR procedure.
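To make the BH-FDR procedure concrete, here is a minimal implementation of the step-up rule (my own sketch; for real analyses a vetted implementation such as statsmodels' multipletests with method='fdr_bh' is a safer choice):

```python
import numpy as np

def bh_fdr(pvals, q=0.05):
    """Benjamini-Hochberg (1995) step-up procedure.

    Returns a boolean array marking which tests are declared
    significant while controlling the false discovery rate at q."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    # Find the largest rank k with p_(k) <= (k / m) * q ...
    passes = p[order] <= np.arange(1, m + 1) / m * q
    reject = np.zeros(m, dtype=bool)
    if passes.any():
        k = np.max(np.nonzero(passes)[0])
        # ... and reject every null with a p-value at or below p_(k)
        reject[order[:k + 1]] = True
    return reject

print(bh_fdr([0.001, 0.008, 0.039, 0.041, 0.30]))
# -> [ True  True False False False]
```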

3. FDR control provides the same degree of assurance as Bonferroni correction that there is indeed some effect: If, after performing an appropriate FDR control procedure, you obtain some statistically significant results, you can be as certain as if you had performed Bonferroni correction that there is indeed at least one true positive in your set of tests.  In other words, if you control FDR at an alpha level of 5% and in truth there is no effect of the factor you’re analyzing at any variable, that means there is only a 5% chance that you will get any erroneously significant results.  Given that FDR control is usually more powerful than Bonferroni correction, this means that FDR control is a much better tool for simply determining if there is an effect or not. However, the disadvantage of FDR control is that because some false positives are allowed, you cannot be certain that any single significant result is accurate.  If it is important to establish the significance of every single test result, a technique like Bonferroni correction or a permutation test that provides “strong control of the family-wise error rate” is necessary.

4. Increasing the number of tests in your analysis can actually produce more significant results: Intuitively, the more individual tests in your set of analyses, the greater the chance of a false positive test result and, thus, the more stringently you should correct for multiple comparisons. This is the way Bonferroni correction works, but it is not necessarily true of FDR control. Adding more tests to your FDR control procedure can actually make it less conservative if the added tests exhibit an effect (i.e., if the added tests are likely to have small p-values). This is because when tests are added that are very likely true discoveries, FDR control can let in more false discoveries, since it is simply controlling the proportion of significant results that are false (see Table 1 for a concrete example). Consequently, one should exclude known effects from an analysis when using FDR control, as the known effects will make you less certain about the presence of any other effects.

Table 1: A concrete example of how adding more tests to an FDR control procedure can increase the significance of other tests. The left column shows five p-values from five hypothetical statistical tests. The middle column shows the FDR adjusted p-values (Benjamini & Hochberg, 1995) if only the first four tests are included. The right column shows the FDR adjusted p-values if all five tests are included. * indicates "significant" p-values less than 0.05. Note that the first test result is not significant after FDR correction when four tests are included but is when five are included.
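The same point can be demonstrated with BH-adjusted p-values. The numbers below are hypothetical values of my own choosing (not those in Table 1), picked so that adding one very small p-value flips another test from non-significant to significant:

```python
import numpy as np

def bh_adjust(pvals):
    """Benjamini-Hochberg adjusted p-values (step-up)."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    adj = p[order] * m / np.arange(1, m + 1)
    # Enforce monotonicity from the largest p-value downward
    adj = np.minimum.accumulate(adj[::-1])[::-1]
    out = np.empty(m)
    out[order] = np.clip(adj, 0, 1)
    return out

print(bh_adjust([0.015, 0.30, 0.40, 0.50]))
# -> [0.06 0.5  0.5  0.5 ]          0.015 is NOT significant at 0.05
print(bh_adjust([0.001, 0.015, 0.30, 0.40, 0.50]))
# -> [0.005  0.0375 0.5  0.5  0.5]  adding one tiny p-value makes it significant
```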

-David Groppe

P.S. If you want to know more about false discovery rate control and other contemporary techniques for correcting for multiple comparisons (e.g., permutation tests), I have a review paper that might be of use.


Benjamini, Y., & Hochberg, Y. (1995). Controlling the false discovery rate: A practical and powerful approach to multiple testing. Journal of the Royal Statistical Society Series B (Methodological), 57(1), 289–300.

Benjamini, Y., & Yekutieli, D. (2001). The control of the false discovery rate in multiple testing under dependency. Annals of Statistics, 29(4), 1165–1188.

Clarke, S., & Hall, P. (2009). Robustness of multiple testing procedures against dependence. Annals of Statistics. DOI: 10.1214/07-AOS557

Farcomeni, A. (2007). A review of modern multiple hypothesis testing, with particular attention to the false discovery proportion. Statistical Methods in Medical Research. DOI: 10.1177/0962280206079046

Groppe, D. M., Urbach, T. P., & Kutas, M. (2011a). Mass univariate analysis of event-related brain potentials/fields I: A critical tutorial review. Psychophysiology. DOI: 10.1111/j.1469-8986.2011.01273.x

Groppe, D. M., Urbach, T. P., & Kutas, M. (2011b). Mass univariate analysis of event-related brain potentials/fields II: Simulation studies. Psychophysiology. DOI: 10.1111/j.1469-8986.2011.01272.x

Romano, J. P., & Shaikh, A. M. (2006). On stepdown control of the false discovery proportion. Institute of Mathematical Statistics Lecture Notes – Monograph Series. DOI: 10.1214/074921706000000383

Best neuroscience figure of 2013

•June 23, 2013 • Leave a Comment
"Neurogenesis researchers can stop worrying and love the bomb." -Gerd Kempermann

“Neurogenesis researchers can stop worrying and love the bomb.” -Gerd Kempermann

I know it is only June, but I'm confident that the above figure from Gerd Kempermann's recent Perspective piece in Science Magazine, entitled "What the Bomb Said About the Brain," will be my favorite neuroscience figure of the year. His article summarizes the findings of a recent publication by Spalding et al. in the journal Cell. Spalding and colleagues ingeniously exploited the fact that above-ground nuclear weapons testing from 1945 to 1963 produced a peak in carbon-14 levels. Some of this carbon-14 made its way into our bodies and can be used to date the age of cells (in this case, neurons). In brief, their findings confirmed that adult neurogenesis in humans is limited to the hippocampus (especially the dentate gyrus). Moreover, neurogenesis appears to be limited to a subpopulation of hippocampal neurons that turns over at a rate of 1.75% a year, while the other population of hippocampal neurons dies and is never replaced.

-David Groppe

NINDS’s Curing the Epilepsies 2013 conference presentations now online

•June 18, 2013 • Leave a Comment

A few weeks ago I had the great fortune to attend the National Institute of Neurological Disorders and Stroke's (NINDS) Curing the Epilepsies conference on the National Institutes of Health's campus in Bethesda. The purpose of the meeting was to bring epilepsy researchers, industry representatives, and advocates together to evaluate progress in epilepsy research over the past five years and to identify research priorities for the immediate future. This was the third such conference (the earlier two occurred in 2000 and 2007), and it was my first experience seeing any branch of the National Institutes of Health in action. It was inspiring to see how NINDS brought such a diverse group of scientists, clinicians, engineers, administrators, and advocates together in a truly cooperative, collaborative environment. I've been in science long enough now to know how long it takes for progress to be made (let alone for discoveries in the lab to make it to patients), but I left Curing the Epilepsies with a renewed sense of optimism that we will indeed come up with new ways to significantly improve the lives of people suffering from epilepsy in the not-too-distant future.

Fortunately, you don’t have to take my word on this; you can watch most of the conference proceedings yourself online courtesy of NINDS.  There were three days of presentations:

and the schedule of speakers is available here.  Unfortunately there is only a single video for each day. So you have to look at the schedule and advance the video via trial and error to find a particular speaker.  All the speakers were worth watching, but other ECoG researchers out there might be particularly interested in Eddie Chang’s talk on the state and future of the surgical treatment of epilepsy, Brian Litt’s presentation on emerging therapeutic epilepsy technologies, and Ruben Kuzniecky’s talk on improving localization of the seizure focus.

-David Groppe