My workflow towards the start of 2018

I’ve often found it helpful to read about how others organize and attempt to optimize their time. Here are a few of the things that I’ve been doing recently on that front, in case anyone else finds them helpful:

  • Vitamin D 5000 IU daily. As far as I can tell the jury is still out on vitamin D, but to me the cost/benefit analysis seems favorable.
  • A daily scoop of Metamucil with water in the morning. A colorectal fellow at my school got me hooked on this, and I’ve found it has added a lot of value to my life. My sister doesn’t like the artificial flavoring ingredients, and I respect her point, but for me I think great is the enemy of good.
  • Bright light in the AM. Especially helpful for East Coast winters. I’ve been doing it for 2 years, but more consistently recently.
  • Smartphone on greyscale mode 24/7. I’ve been doing this for several months now, and I think it has helped to mitigate some of my phone addiction. It especially makes photo sharing sites like FB and Instagram a lot less interesting.
  • Charging my smartphone in the living room at night. I’ve done this for a month or two as well. Beyond the probably helpful but more nebulous effects on sleep hygiene, this definitely decreases the probability of getting a stressful email/text right before bed or in the middle of the night when I wake up briefly.
  • Blocking FB and Twitter on web and mobile. I’ve been doing this for the past month or two as a mental health break from these websites, and I’ve found it calming. I occasionally break the lock to look up a particular thing, but I always try to add it back right away. Just adding barriers is helpful for me when I have the “itch” to check something that’s likely to make me unhappy in the long run.
  • Blue blocking orange glasses at night. I’ve been doing this for about 4.5 years. While I think it probably helps my sleep by preventing inhibition of melatonin production, the most obvious effect is that it helps decrease eye dryness.
  • Caffeine pills instead of coffee. The first thing I do on most days is to take a 100 mg caffeine pill. I like coffee and drink it occasionally, but for daily use I find caffeine pills cheaper, more convenient, and much easier to dose. The precise dosing is key for me because I’m very sensitive to caffeine withdrawal and can end up with withdrawal headaches just from drinking a particularly large cup of coffee one day.

Making a shiny app to visualize brain cell type gene expression

Attention conservation notice: A post-mortem of a small side project that is probably not interesting to you unless you’re interested in molecular neuroscience.


This weekend I put together an R/Shiny app to visualize brain cell type gene expression patterns from 5 different public data sets. Here it is. Putting together a Shiny application turned out to be way easier than expected — I had something public within 3 hours, and most of the rest of my time on the project (for a total of ~ 10 hours?) was spent on cleaning the data on the back end to get it into a presentable format for the website.
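To give a sense of how little code a working app needs, here is a minimal sketch of the pattern. The data frame expr_long and its columns are made-up stand-ins for illustration, not the app’s actual code:

library(shiny)
library(ggplot2)

# assumes a long-format data frame with columns: gene, cell_type, dataset, expression
ui <- fluidPage(
  textInput("gene", "Gene symbol", value = "HTT"),
  plotOutput("expr_plot")
)

server <- function(input, output) {
  output$expr_plot <- renderPlot({
    df <- subset(expr_long, gene == toupper(input$gene))
    ggplot(df, aes(cell_type, expression)) +
      geom_col() +
      facet_wrap(~ dataset, scales = "free_y") +
      theme(axis.text.x = element_text(angle = 45, hjust = 1))
  })
}

shinyApp(ui = ui, server = server)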

What is the actual project? The goal is to visualize gene expression in different brain cell types. This is important because many disease-relevant genes are only expressed in one brain cell type but not others, and figuring this out can be critical to learning about the etiology of that disease.

There’s already a widely-used web app that does this for two data sets, but since this data is pretty noisy and there are subtle but important differences in the data collection processes, I figured that it’d be helpful to allow people to quickly query other data sets as well.

As an example, the gene that causes Huntington’s disease has the symbol HTT. (I say “cause” because variability in the number of repeat regions in this gene correlates almost perfectly with the risk of Huntington’s disease development and disease onset.) People usually discuss neurons when it comes to Huntington’s disease, and while this might be pathologically valid, by analyzing the data sets I’ve assembled you can see that this gene is expressed across a large number of brain cell types. This raises the question of why — and/or if — variation in its number of repeats only causes pathology in neurons.

[Figure: screenshot of the app showing HTT expression across brain cell types]

Here’s another link to the web app. If you get a chance to check it out, please let me know if you encounter any problems, and please share if you find it helpful.

References

Aziz NA, Jurgens CK, Landwehrmeyer GB, et al. Normal and mutant HTT interact to affect clinical severity and progression in Huntington disease. Neurology. 2009;73(16):1280-5.

Huang B, Wei W, Wang G, et al. Mutant huntingtin downregulates myelin regulatory factor-mediated myelin gene expression and affects mature oligodendrocytes. Neuron. 2015;85(6):1212-26.

Eight years of tracking my life statistics

Attention conservation notice: Borderline obsessive navel-gazing.


Most mornings, I start my day — after I lie in bed for a few minutes willing my eyes to open — by opening up a Google spreadsheet and filling in some data about how I spent that night and the previous day. I’ve been doing this for about eight years now and it’s awesome.

I decided to post about it now because self-tracking as a phenomenon seems to be trending down a bit. Take for example former WIRED editor Chris Anderson’s widely shared tweet:

[Image: Chris Anderson’s widely shared tweet on self-tracking]

So this seems a good time to reflect upon the time I’ve spent self-tracking so far and whether I’m finding it useful.

But first, a Chesterton’s fence exercise: why did I start self-tracking? Although it’s hard to say for sure, here’s my current narrative:

  • When I was a senior in high school, I remember sitting in the library and wishing that I had extensive data on how I had spent my time in my life so far. That way when I died, I could at least make this data available so that people could learn from my experiences and not make the same mistakes that I did. I tried making a Word document to start doing this, but ultimately I gave up because — as was a common theme in my misspent youth — I became frustrated with myself for not having already started it and decided it was too late. (I hadn’t yet learned about future weaponry.)
  • I used to read the late Seth Roberts’ blog — it was one of my favorites for a time — and he once wrote a throwaway line about how he had access to 10 years of sleep data on himself that he could use to figure out the cause of his current sleep problems. When I read that early in college I thought to myself “I want that.”
  • In sophomore year of college my teacher and mentor Mark Cleaveland assigned me (as a part of a class I was taking) to write down my sleep and how I spent my time in various activities for a week. This was the major kick into action that I needed — after this, I started tracking my time every morning on the spreadsheet.

Reportedly, it takes about 66 days on average to develop a habit, and the more complex the habit, the longer it takes. I think that by about 100-150 days in it was pretty ingrained in me that this was just something that I do every morning. After that, it didn’t take much effort. It certainly did take time though — about 3-5 minutes depending on how much detail I write. That’s the main opportunity cost.

Three of the categories I’ve tracked pretty consistently are sleep, exercise, and time spent working.

Here’s hours spent in bed (i.e., not necessarily “asleep”):

[Figure: daily hours in bed over the past eight years; black dots = data points from each day, red line = 20-day moving average]

Somewhat shockingly, the mean number of hours I’ve spent in bed the last 8 years is 7.99 and the median is exactly 8.
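For anyone curious how a plot like that gets made, here is a hedged sketch in R, assuming you export the sheet to a CSV with columns date and bed_hours (both names invented for illustration):

d = read.csv("tracking.csv")
d$date = as.Date(d$date)

# 20-day trailing moving average
ma20 = stats::filter(d$bed_hours, rep(1/20, 20), sides = 1)

plot(d$date, d$bed_hours, pch = 16, cex = 0.3, xlab = "", ylab = "Hours in bed")
lines(d$date, as.numeric(ma20), col = "red", lwd = 2)

mean(d$bed_hours, na.rm = TRUE)
median(d$bed_hours, na.rm = TRUE)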

Here’s exercise:

[Figure: daily hours of exercise over the same period]

I’m becoming a bit of a sloth! Hopefully I’ll be able to get this back up over the next few years. Although note that I have no exercise data for a few months in Summer ’15 because I thought that I would switch solely to Fitbit exercise data. I then got worried about vendor lock-in and started tracking manually again.

Here’s time spent working (including conventional and non-conventional work such as blogging):

[Figure: daily hours worked over the same period]

One of the other things I’ve been tracking over the past few years is my stress, on an arbitrary 1-10 scale. Here’s that data:

[Figure: daily stress level on the 1-10 scale]

In general, my PhD years have been much less stressful than my time studying for med school classes and Step 1. Although it’s not perfect, I’ve found this stress level data particularly valuable. That’s because every now and then I get stressed for some reason, and it’s nice to be able to see that my stress has peaked before and has always returned to reasonably low levels eventually. I think of this as a way to get some graphical perspective on the world.

I track a few other things, including time spent on administrative tasks (like laundry), time spent leisure reading, time spent watching movies, and time spent socializing.

I also track some things that are too raw to write about publicly. Not because I’m embarrassed to share them now, but because I’m worried that writing them in public will kill my motivation. This is definitely something to consider when it comes to self-tracking. For me, my goal has first and foremost been about self-reflection and honesty with myself. If I can eventually also share some of that with the world, then so much the better.

Overall, I’ve found three main benefits to self-tracking:

  1. Every now and then, I’ll try to measure whether a particular lifestyle intervention is helping me or not. For example, a couple of months ago I found that there was a good correlation between taking caffeine (+ L-theanine) pills and hours worked (see the sketch after this list). Although this is subject to huge selection bias, I still found it to be an interesting effect and I think it has helped me optimize my caffeine use, which I currently cycle on and off of.
  2. There have been a few times these past 8 years when I’ve suddenly felt like I’ve done “nothing” in several months. One time this happened was about a year into my postbac doing science research at the NIH when it seemed like nothing was working, and it was pretty brutal. That time and others, it’s been valuable for me to look back and see that, even if I haven’t gotten many tangible results, I have been trying and putting in hours worked. Especially in science where so many experiments fail, it’s helpful for me to be able to measure progress in terms of ideas tried rather than papers published or some other metric that is much less in my control. GitHub commits could also work in this capacity for programmers, although that’s slightly less general.
  3. The main benefit, though, has not been my ability to review the data, but rather as a system for incentivizing me to build process-based habits that will help me achieve my goals. I enjoy the bursts of dopamine I get when I’m able to write that I worked hard or exercised the previous day — or that I got a lot of high-quality socializing in with friends or family — and it makes me want to do that again in the future.
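As an illustration of the kind of check in point 1, here is a minimal sketch, reusing the hypothetical exported data frame d from above and assuming a 0/1 column caffeine_pill and a numeric column work_hours (again, invented names):

# point-biserial correlation between pill days and hours worked
cor.test(d$caffeine_pill, d$work_hours)

# and a quick visual check
boxplot(work_hours ~ caffeine_pill, data = d,
        names = c("no pill", "pill"), ylab = "Hours worked")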

Do you want to try a similar thing? Check out this blank Google spreadsheet for a quick start; it has a bunch of possible categories and a few example days for you to delete when you copy it over to your own private sheet. I like Google sheets because they are free and accessible from anywhere with an internet connection, but they’re certainly not a requirement.

Even if you don’t try it, thanks for reading this essay and I hope you got something out of it.

 

How to version control Microsoft Word documents among collaborators

Abstract: Just a quick little tool I use to version control my drafts; highly, highly unlikely to be of use unless you a) use git and want to regularly back up the text of your Word document and b) are working with collaborators who do not use git.


In an attempt to write more, I’m trying to remove barriers. One barrier is the worry that if I delete or change something, I’ll want it back, which keeps me from making progress. Version control with git solves this nicely for code and simple text documents, but bulky Word documents do not play nicely with git.

Enter pandoc. This tool converts Word documents to Markdown, a plain-text formatting style. Installing it on Mac OS X 10.10 is pretty easy.
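If you use Homebrew, for example, a single command should do it:

brew install pandoc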

But what if you have a Word document in one folder but you want your Github repo in another folder? (Say, your Word doc is in Dropbox to share with collaborators or among computers easily, but you follow the recommendation to not use Github with Dropbox.) Enter this simple shell script for automating the push to Github:

cp Path_To_Paper/Paper.docx . 
pandoc -s Paper.docx -t markdown -o Paper.md 
git add . 
git commit -m "$1" 
git push origin master

Call the shell script from within your Github repo, followed by your commit message. Note that this is smoother if your git config file is set up such that Github won’t ask for your username and password every time you push.
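For example, if you saved the script above as backup_paper.sh (a name I’m making up here), a run might look like:

chmod +x backup_paper.sh  # only needed once, to make it executable
./backup_paper.sh "tightened up the methods section"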

Notes on segmenting oligodendrocytes in electron microscopy images using Python3 and OpenCV 3.0

Attention Conservation Notice: Not much new here; mostly just notes to myself for future reference. Reading this is unlikely to be a good use of your time, unless you are trying to install OpenCV 3.0 on Python3, in which case, I tremble with sympathy. (In all seriousness, installing it took me about an hour, but I made some trivial mistakes and it could be way quicker.)


Installing OpenCV for Python3

First of all, know that downloading and installing OpenCV (the CV stands for “Computer Vision”) for Python on a MacBook Pro can be pretty time consuming; one commenter I saw online called it “a rite of passage.” After failing to successfully install it for Python 2.7 for about half an hour, I eventually decided to use this opportunity to upgrade to Python3.

Installing Python3 is easy:

brew install python3

I then followed Luis González’s useful tutorial for installing OpenCV as a module for Python3. I still ran into one problem, which is that I had a different version of Python than the tutorial. My recommendation here is to manually cd to those directories in your terminal to make sure that the files exist as they are meant to, and then copy and paste the paths into your CMake GUI. Here is my successful configuration:

[Image: successful CMake configuration for the OpenCV build]

Specifically, if you are getting the error

fatal error: 
 'Python.h' file not found
#include <Python.h>

(perhaps at 98% completion!), make sure to check that your $PYTHON3_INCLUDE_DIR is set properly; mine had a typo.
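For reference, those cache entries can also be set on the command line rather than in the GUI. The flag names below are real OpenCV 3.0 build options, but the paths are purely illustrative; substitute the ones that actually exist on your machine:

cmake -D CMAKE_BUILD_TYPE=RELEASE \
      -D PYTHON3_EXECUTABLE=/usr/local/bin/python3 \
      -D PYTHON3_INCLUDE_DIR=/usr/local/Frameworks/Python.framework/Versions/3.4/include/python3.4m \
      -D PYTHON3_PACKAGES_PATH=/usr/local/lib/python3.4/site-packages \
      ..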

How to identify oligodendrocytes in electron microscopy images 

Once I had OpenCV downloaded, I was able to begin actually analyzing some EM images. I’m interested in being able to distinguish oligodendrocytes on electron microscopy from other brain cell types. It turns out this is a pretty difficult problem, and as far as I know, there is no large database of images from which a substantial training set (e.g., > ~ 200 images or so) could be built.

Instead, we can go based on features of oligodendrocytes that have been described previously (e.g., here, here, here, here, and here). As far as my novice understanding goes, these are some key features of oligodendrocytes:

  • Smooth, round-to-oval outline
  • Centrally placed, round-to-oval nuclei (with “nucleoli occasionally seen on the plane of section”)
  • Distinct and dark Golgi apparatus in the cytoplasm
  • Thin processes (when they are visible)

And here are some features distinguishing oligodendrocytes from other cell types:

  • Astrocytes: a) paler (cytoplasm and nuclei), b) glycogen granules, c) filament bundles (due to GFAP), and d) wider processes (if the processes can be identified)
  • Neurons: large and pale nuclei
  • Microglia: more likely to be irregularly shaped, and sometimes have dark, small inclusions
  • NG2 cells: elongated or bean-shaped nuclei, and can contain long endoplasmic reticulum

Here is a picture of an oligodendrocyte from Alan Peters’s helpful website:

[Image: electron micrograph of an oligodendrocyte, from Alan Peters’s website]

Using OpenCV to parse tissue slices 

Here is my code. I tried a few different strategies. My goal is to slice out a portion of the image that is recognizable as “the oligodendrocyte,” which can be seen by the human eye as the red portion here (also from Alan Peters’s website):

[Image: the same micrograph with the oligodendrocyte highlighted in red]

1) Canny edge detection. This seems to be “too much” to be useful.

[Image: Canny edge detection output]

2) Otsu’s reduction of a greyscale to a binary image. This is actually not bad; maybe it could be stacked with another method?

[Image: Otsu thresholding output]

3) Blob detection. This is somewhat promising, but unfortunately the blobs detected are pretty small (too small to be the oligodendrocyte, or even the nucleus), and when I try to make them larger, no blobs are detected.

[Image: blob detection output]
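For concreteness, here is a hedged sketch of all three strategies with Python3 and OpenCV 3.0; the filename and parameter values are illustrative stand-ins, not the exact ones from my code:

import cv2

img = cv2.imread("oligo_em.png", cv2.IMREAD_GRAYSCALE)

# 1) Canny edge detection: picks up essentially every membrane in the EM image
edges = cv2.Canny(img, 100, 200)

# 2) Otsu's method: blur first, then let Otsu pick a global threshold from the histogram
blur = cv2.GaussianBlur(img, (5, 5), 0)
_, otsu = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# 3) Blob detection: filter by area to look for nucleus-scale blobs
params = cv2.SimpleBlobDetector_Params()
params.filterByArea = True
params.minArea = 500
detector = cv2.SimpleBlobDetector_create(params)
keypoints = detector.detect(img)
blobs = cv2.drawKeypoints(img, keypoints, None, (0, 0, 255),
                          cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)

for name, result in [("canny.png", edges), ("otsu.png", otsu), ("blobs.png", blobs)]:
    cv2.imwrite(name, result)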

So, this is still a work-in-progress, but I wanted to get some notes up.

Reference

Peters A, Sethares CF. The fine structure of the aging brain. Boston University School of Medicine, Boston, MA. www.bu.edu/agingbrain. Supported by the National Institute on Aging (NIH), grant P01-AG000001.

How to run kallisto on NCBI SRA RNA-Seq data for differential expression using the Mac terminal

Attention Conservation Notice: This post explains how to run the exceptionally fast k-mer-based RNA-seq pseudoaligner kallisto from the Pachter lab on data you download from NCBI’s Sequence Read Archive, and then analyze it for differential expression using voom/limma. As with everything in bioinformatics, this will likely be obsolete in months, if not weeks.


Kallisto is really fast and not memory-intensive, without sacrificing accuracy (at least, according to their paper), and therefore has the potential to make your life a lot easier when it comes to analyzing RNA-seq data.

As a test data set, I used the very useful DNAnexus SRA search engine to search the SRA database for a transcriptome study from human samples with 2-3 biological replicates in 2 groups, so that I could achieve more robust differential expression calls without having to download too much data.

I ended up using SRP006900, which is titled “RNA-seq of two paired normal and cancer tissues in two stage III colorectal cancer patients.” I actually wasn’t able to find a study describing a differential expression analysis of it in a 3-5 min search, but since it was made public almost four years ago, I’m sure that it has been analyzed in such a manner. The data are single-end reads with an average of 65 base pairs per read.

In order to download this data set (which totals ~ 10 GB over its four files, in .fastq format) from the SRA via the command line, I used the SRA toolkit. You’ll want to make sure this is downloaded correctly by running a test command such as the following on the command line:

$PATH/sratoolkit.2.5.0-mac64/bin/fastq-dump -X 5 -Z SRR390728 

You’ll also want to download kallisto. In order to install it, I copied it to my /usr/local/bin/ folder to make it globally executable, as described on the kallisto website.

You also need to download the fasta file for your transcriptome of interest and create an index file that kallisto can work with. You can download the hg38 reference transcriptome from the kallisto website or from UCSC.

For pre-reqs, it is helpful to have homebrew installed as a package manager. If you do (or once you do), these three commands did the trick for me:

brew update
brew install cmake
brew install homebrew/science/hdf5

Next you’ll need to actually download the data from each run, convert them to .fastq, and then run kallisto on them. For extensibility, I did this in the following shell script.

First, a config file with the sample names and average read length (will be needed for kallisto if the data is single-end, in my experience):

declare -a arr=("SRR222175" "SRR222176" "SRR222177" "SRR222178")
AVG_READ_LENGTH=65

(Update 5/13/15: The average read length should be average fragment length, which I’m working on figuring out the best way of estimating from single-end reads (if one is possible). In the meantime, it may be easier to choose a paired-end read sample from SRA instead, from which the average fragment length can be estimated by kallisto. Thanks to Hani Goodarzi for the helpful pointer that I was using the wrong parameter.)

Next, I use a shell script to execute the relevant commands on these sample names to build your abundance.txt files. Note that this data set has single-end reads, so you only call one .fastq file, rather than the two in their example. I put each output in a directory called output_$RUNNAME that I can call from R for downstream analyses. Here’s the full shell script (also on Github):

#path to where the SRA bin folder is located; could eliminate by making the SRA toolkit globally executable 
PATH_SRA_DIR="your_path_here/"

#go to where your kallisto test directory is found, if necessary
echo "cd your_path_here"

#load the config file
source test_kallisto/config_SRP006900.file

#download the data 
for i in "${arr[@]}"
do
   "$PATH_SRA_DIR"sratoolkit.2.5.0-mac64/bin/fastq-dump --accession $i --outdir test_kallisto
done

#create the transcripts index file 
kallisto index -i transcripts.idx transcripts.fasta.gz

#create output directories, and then run kallisto for each of the runs
for i in "${arr[@]}"
do
	mkdir output_"$i"
	kallisto quant -i transcripts.idx -o output_"$i" -l "$AVG_READ_LENGTH" "$i"".fastq"
done

Once you have these files, you can analyze them in R using limma. Here’s how I went about it. First, read in the files and merge the transcripts per million (tpm) calls into one data frame:

library(limma)

#sample names downloaded from SRA 
sample_list_n = c("SRR222175", "SRR222176", "SRR222177", "SRR222178")

for(i in 1:length(sample_list_n)){
	tmp = read.table(file = paste0("test_kallisto/output_", 
		sample_list_n[i], "/abundance.txt"), header = TRUE) 
	assign(sample_list_n[i], tmp)
}

sample_list = mget(sample_list_n)

#give the list unique names 
sample_list_uni = Map(function(x, i) setNames(x, ifelse(names(x) %in% "target_id",
      names(x), sprintf('%s.%d', names(x), i))), sample_list, seq_along(sample_list))

full_kalli = Reduce(function(...) merge(..., by = "target_id", all=T), sample_list_uni)

tpm_vals = full_kalli[, grep("tpm", names(full_kalli))]
rownames(tpm_vals) = full_kalli$target_id

Then make a contrasts matrix for normal vs cancer samples:

groups = c("normal", "colon_cancer", "normal", "colon_cancer")
condition &lt;- model.matrix(~0 + groups)
colnames(condition) = c("normal", "colon_cancer")
cont_matrix = makeContrasts(norm_canc = normal - colon_cancer, levels = condition)

Then compute differential expression using that contrast matrix:

#this spits out an EList object 
v = voom(counts = tpm_vals, design = condition)
fit = lmFit(v, condition)
fit = contrasts.fit(fit, cont_matrix)
fit = eBayes(fit)
top_table = topTable(fit, n = 10000, sort.by = "p")

And that’s it. Interestingly, the transcript called as the #1 most differentially expressed using this method, ATP5A1, has been previously implicated as a marker for colon cancer.

Update 5/13/15: Fixed two typos, switched one link, and switched “genome” to “transcriptome” (I was sloppy previously) per @pmelsted’s useful comment on twitter.

Update 8/10/15: As Seb Battaglia points out on twitter, “voom assumes you cut out out 0s and low intensity tpm. I’d add that step or data hist() will be skewed.” I don’t have the opportunity to re-run the analyses right now (!), but please take this into consideration.
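If you want to fold that in, here is a minimal sketch of the filtering step before calling voom; the cutoffs are arbitrary placeholders, not validated choices:

#keep transcripts with tpm > 1 in at least 2 of the 4 samples
keep = rowSums(tpm_vals > 1) >= 2
tpm_vals = tpm_vals[keep, ]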

Review of Kandel’s Principles of Neural Science

After its release was pushed back many times, the 5th edition finally came out within the past year, and I have been reading it on my laptop’s Kindle app throughout my first year of core PhD courses and in preparing for my qualifying exam. I had read some of the 4th edition during undergrad, as well.

First, I discuss the broad negatives and positives, and then I present some eminently skimmable chapter-by-chapter notes. In general my feelings towards the book are warm, and I do expect that if you read the textbook, dear reader of this review, you will learn a lot from it.

Positives

1) Does a good job of not trying to be Alberts’ *Molecular Biology of the Cell*. The sections on cell biology, the central dogma, and non-neuroscience-related signaling pathways are refreshingly bare-bones. Seek resources elsewhere if you want to go ham on transcription, translation, and the MAPKKK-MAPKK-MAPK cascade.

2) Perception, sensation, and movement were not the reasons that I first became interested in neuroscience, and, generalizing from my one example as is de rigueur in book reviews, I think that is true of most students. And while this might be just Stockholm syndrome, I’m actually quite happy that there is so much detail and care put into these sections, which make up around 1/3rd of the text. These fields are way more tractable to study than the sexier topics of emotion, learning, and personal identity, yet most of the principles that have been discovered there are likely to generalize.

As an example of this, consider the work of Charles Sherrington, who among other accomplishments won the 1932 Nobel for explaining spinal reflexes as a balance of excitation and inhibition. And now that we have some fancy techniques like conditional genetic KOs and optogenetics, we know that a variety of other phenomena, from critical periods to anxiety, are also regulated via a very similar balance of excitation and inhibition.

3) Most chapters do an excellent job of motivating their material. For example, they emphasize themes from the history of how people have thought about the brain, e.g. James and Freud. There are also a few references to art and literature, such as Gabriel García Márquez, that are really money.

4) Most fundamentally, this is the preeminent textbook on how your mind works and how you are able to understand the words that you are currently reading. And there are some chapters, especially the last three (65 – 67), that really delve into this. What’s not to love?

Negatives

1) In general neuroscience tries very hard to distinguish itself from psychology and this makes good sense in terms of specialization. But the field is still operating in the wake of Karl Lashley, a famous experimentalist who in the 1930s concluded that brain regions had “equipotentiality” for learning mazes not because his lesions were flawed but because his tasks were not specific enough. Designing behavioral tasks is not trivial. Yet, you will not read much about the principles behind how to do so, and nothing about the matching law or Rescorla-Wagner. (My bias: I did some research in learning and behavior in undergrad.)

2) For one of our classes we read an older (3rd edition) version of Chapter 13 on Neurotransmitters. There were way more equations explaining different models of neurotransmitter vesicle release patterns, e.g. explaining the use of the Poisson distribution as an approximation for the binomial. It doesn’t make sense that the text has become less quantitative at the same time that math has become easier to use to explain phenomena, as a result of advances in systems biology and just programming generally.
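To recall the flavor of what was cut, here is my own sketch (not the text’s notation): if each of n release sites releases a vesicle with probability p, the number of quanta released, k, is binomially distributed, and for large n and small p this is well approximated by a Poisson distribution with mean λ = np:

P(k) = \binom{n}{k} p^k (1 - p)^{n - k} \;\approx\; \frac{e^{-\lambda} \lambda^k}{k!}, \qquad \lambda = np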

3) Why does searching for “optogenetics” yield me zero results?

4) I prefer my pedagogical material to be structured in the format of *example 1*, *example 2*, (*optional example 3*), and *inducted principle*. The examples only matter insofar as they motivate the principles. Kandel’s textbook strays slightly too far from this, I think. In particular, the text tends to enshrine the examples, such as CREB, CaMKII, PKA, and their ilk, as worthy of our worship in and of themselves. This sets the trend for how neuroscience courses are taught, and for that reason it is a bit troubling.

Chapters

1) “The Brain and Behavior”. A nice historical context, in which phrenologists continue to get no love.

2) “Nerve Cells, Neural Circuitry, and Behavior.” Explains the basic functions of neurons and some about how different types vary. Tells us that glial cells might be as heterogeneous as neurons. Au contraire, transcriptomics approaches (e.g., Cahoy et al, 2008, PMID: 18171944) suggest that oligodendrocytes and astrocytes as classes are each as heterogeneous as neurons, making them more so when combined. Look, no one’s arguing here that the neuron isn’t the fundamental unit of cognition, but in the 6th edition, all three major glia types (i.e., OLs, astrocytes, and microglia) need more attention. Ependymal cells, on the other hand, can continue to be safely ignored until further notice.

3) “Genes and Behavior”. Good chapter. The section on multigenic traits at the end is very nicely done, summarizing the debate that has been raging over the past decade for each disease about whether there are rare variants with large effects or common variants with small effects. Also does a good job of explaining the concordance of psychiatric diseases among relatives (e.g., fraternal vs identical twins) and what it implies.

4) “The Cells of the Nervous System.” This chapter tells us that there are about 100 types of neurons, but doesn’t a) tell us why or b) point to a list that actually enumerates them. Also, it continues to propagate the neuron-centric bias.

5) “Ion Channels.” Nice chapter on electrophysiology, and does a particularly good job of referencing future chapters and presenting examples as stepping stones for more general principles. One unsolicited suggestion: the fact that Na, Mg, K, and Ca are the major cations used seems arbitrary — it’s like, what’s up with that, nature? But once you look at a periodic table this becomes way less arbitrary, because they’re all right next to one another. So, I suggest that they include a periodic table as a sub-figure in the 6th Ed.

6) “Membrane Potential.” Lots of vocabulary; a necessary chapter but not going to blow anyone away. Assumes you know some basic chemistry, like equilibrium vs steady state.

7) “The Action Potential.” Gets the history right, e.g. correctly emphasizing Cole and Curtis’s classic experiment correlating the AP to an increase in membrane conductance. Explains the key concept of positive feedback in voltage-gated sodium channel opening well. Could have used more I-V plots.

8) “Synaptic Transmission.” Table 8.1 has a nice comparison of gap junctions vs synapses. More information in the book (e.g., different types of ion channels) should be summarized in this format.

9) “Nerve-Muscle Synapse.” Could have had slightly more on what the muscle actually does with the potential generated at the NMJ. Also, I would have liked a summary table comparing NMJ to CNS synapse properties, to sort out the differences and similarities.

10) “Synaptic Integration.” One of the best chapters, with three money figures: a) Figure 10-4 is a nice way of showing the differences between AMPA and NMDA receptors, b) Figure 10-11 shows nicely how EPSPs and IPSPs can sum linearly, an example of how neurons integrate signals, and c) Figure 10-14 shows why time and length constants matter. It explains the concept of synaptic democracy but doesn’t actually say the term — why not? More people need to be thinking about synaptic democracy.

11) “Modulation of Synaptic Transmission.” Kind of a boring chapter — not enough induction from examples to principles for me. I s’pose some of these pathways, like GPCRs and RTKs, are just essential to know. Small aside: a couple of typos in the Kindle edition where beta is mistranscribed as alpha — I saw it twice, so it probably happened in an automated process.

12) “Transmitter Release.” Another really nice chapter, describing very fundamental processes at the pre-synaptic terminal. Cooperativity of calcium binding to synaptotagmin is one of the most interesting concepts in the book, reminiscent of the oxygen-hemoglobin binding curve in physiology. Could have expanded upon it further.

13) “Neurotransmitters.” Too much focus on the chemistry and the names, not enough focus on the techniques, diversity of neurotransmitters, and history. Also, the four criteria for a molecule to be considered an NT don’t seem all that canonical, given that other sources seem to differ. Granted, they do discuss the imprecision of the definition. Finally, astrocytes could have been discussed more w/r/t NT reuptake and recycling.

14) “Diseases of the Nerve and Motor Unit.” Clinical chapter, almost seems out of place given the basic focus of most of the rest of the nearby chapters. Not very conceptual.

15) “CNS Organization.” Lots of anatomy. Why do they say that there are five special senses (touch, vision, hearing, taste, smell)? They can’t mean the anatomical definition because that would exclude touch, but they also can’t mean all the senses because clearly there are more, like proprioception and vestibular senses. In chapter 21 the authors are much more sensible about this.

16) “Organization of Perception and Movement.” Basic overview of sensory and motor systems.

17) “Internal Representations.” Lots about sensory maps and their plasticity, which is well explained. Then some about consciousness, which is just so hard to talk about precisely. A nice effort, though.

18) “Organization of Cognition”. A fairly comprehensive chapter and a lot of the material on cortical organization is essential. In general I found the discussion of the lesion studies to be somewhat wanting insofar as they were too qualitative; i.e., we are told that a “lesion” in X region leads to Y consequence, without a discussion of the probability A that this will occur, the dependence on the size of the lesion B, and/or the other possible consequences C and D.

19) “Cognitive Premotor Systems.” Fairly high-level overview of motor planning regions, which overall seems well put together. One quibble: although it is an attractive hypothesis, in my view mirror neurons in F5 of the inferior frontal gyrus probably do not mediate action understanding; for more on this, see Greg Hickok’s 2009 critique.

20) “Functional Imaging.” Basic intro to fMRI, PET, and DTI and some relevant results.

21) “Sensory Coding.” A nice, quantitative overview of sensory systems. Also contains some subversive writing that demands certain tasks of you as a reader, which makes the experience much more enriching. Recommended.

22) “Somatosensory System.” Figure 22-4 has nice models of possible ways that mechanoreceptors could work. Lateral inhibition is a very basic concept and it could have been explained in a diagram of its own.

23) “Touch.” Two-point discrimination is explained well and is a cool concept. There is maybe slightly more detail about S1 subregions than is necessary, especially seeing as the principles themselves (e.g., specificity, bottom-up processing) were not greatly enriched by these details.

24) “Pain.” Nice chapter, with cool concepts like gate theory. Delves into a channel level understanding and neuropeptides well. One nit is that referred pain is an important phenomenon and deserved more attention, in my view.

25) “Visual Processing Summary.” Probably most people could just read this (and not the subsequent four chapters) and know more visual neuroscience than they ever realistically intended to.

26) “The Retina.” I would have liked to see this chapter discuss artificial retinas, or even better, frame the discussion in terms of how you would make one. Figure 26-13B is cool and looking at it is a nice way to check whether you understand the concept of a receptive field at a low enough level.

27) “Intermediate Visual Processing.” Classic visual system material, including explanations of illusions and what they tell us about how the system is working. Hardcore fans only.

28) “High-Level Visual Processing.” Section on how sensory experience of an object in a visual field (more of a bottom-up process) and recall of that same image (more of a top-down process) rely on similar representations in the inferior temporal cortex is worth the price of admission.

29) “Visual Processing and Action.” One key insight from this chapter is that the motor system likely sends a copy of its planned movement to the parietal system so that the visual field can remain constant despite the ubiquitous presence of saccades.

30) “The Inner Ear.” Nicely explained chapter with good diagrams. I liked that they explained how hearing aids worked — it was a good way to apply the principles one learns earlier in the chapter.

31) “Auditory CNS.” I focused on the sections related to the functions of the MSO and LSO, and these were nice. Also, the section on echolocation in insectivorous bats is so cool.

32) “Smell and Taste.” Both brief, mainly focusing on the receptor level. In my view, once you’ve read about one GPCR taste receptor type, you’ve kind of read about them all. Miraculin, the protein responsible for flavor tripping, should have gotten some play when discussing the sweet receptor.

33) “Movement Organization.” I liked the section on speed vs accuracy trade-offs as described by Fitts’s law. Also the section on how prediction compensates for delayed sensorimotor feedback is interesting and seems relevant to sports.

34) “The Motor Unit.” Figure 34-14 is a very nice example of agonist-antagonist action at a joint, maybe the best I’ve seen. The section on the sarcomere brought back bad memories; is it really necessary in a neuro text?

35) “Spinal Reflexes.” Pretty technical chapter — these are difficult concepts which require that you expend some APs in thinking about them. Nicely explains the concept of the muscle spindle, the 1a fibers that innervate it, and why you need to couple a-MN activation to g-MN activation to sustain spindle tension.

36) “Locomotion.” Hammers in the idea that complex motor patterns are a) programmed at a high level and b) involve alternating contraction of flexors and extensors. Not sure it needed its own chapter, but locomotion itself is a fascinating phenomenon and obviously one that has vexed philosophers throughout the ages.

37) “Motor Cortex.” The topographically organized motor maps are probably the most interesting part of this chapter.

38) “Parietal and Premotor Cortex.” The text at the beginning about having goals, maintaining them, developing possible strategies to achieve them, and then executing actions to carry out one (or more) of those strategies, was awesome and very pro-rationality. I felt a sense of kindred-spiritedness with the authors. Box 38-1 was a bit vague, and included the words “emergent dynamic”, which should just not be put next to one another, ever. Also, some of the descriptions of experiments done to determine findings seemed too long. Finally, the talk of mirror neurons did not appear to be complementary to that discussed in Chapter 19.

39) “Control of Gaze.” Figure 39-3 is a beautiful explanation of a phenomenon I spent many, many hours puzzled about when I was much younger, and it ought to be included in every elementary school as a part of the “This is How Your Body Works” class. All in all I think this section is explained more clearly (although it might take longer) than in Drake’s *Gray’s Anatomy*.

40) “Vestibular System.” Kind of a clinical chapter. The section showing the cerebellar feedback onto the vestibular-ocular reflex, and how this needs to be dynamic in the presence of eyeglasses, for example, was particularly well done.

41) “Posture.” Short-ish chapter that makes some useful points about body sway and the maintenance of center of mass. Not very conceptual, though.

42) “Cerebellum.” Solid chapter, with a good mix of anatomical and cellular facts and speculations, noted as such, about why things might be set-up that way. The repeated motif in the cerebellum of parallel fiber and climbing fiber innervating Purkinje cells is deeply instilled in the canon of neuroscience.

43) “Basal Ganglia.” The “rate-model” of direct and indirect BG function is highly influential, so I think the authors should have spent more time, and maybe a couple of diagrams, explaining precisely what is better about the alternative models.

44) “Genetics of Neurodegeneration.” Clinical chapter. A sobering reminder that even though we can know much about a disease, like we do for Huntington’s, that alone won’t necessarily translate to acceptable treatments.

45) “Direct Brain Stem Functions.” The cranial nerve discussion is mostly of interest clinically. Melanopsin-containing RGCs deserve more love; they are fascinating and important to the everyman w/r/t minimizing blue-shifted light at night.

46) “Modulatory Brain Stem Functions.” Discusses a wide breadth of approaches to the brain stem including connectivity, ion channels, imaging, and clinical relevance. Maybe slightly too ambitious.

47) “The Autonomic NS.” In general this is an important topic and probably should be supplemented by material elsewhere if you are at all interested in physiology. The stuff on LHRH was interesting as a model system for neuropeptides. I might have liked to see more on the enteric nervous system.

48) “Emotions.” I’m surprised this wasn’t a longer chapter given all the work that has been done on the amygdala over the past decade. Figure 48-6A/B is useful as a model for how conditioning changes single cell activity.

49) “Homeostasis and Motivation.” I was kind of bored by the osmolarity sensing pathway, in part because I had already learned a lot of it in physiology. The leptin, agouti-related peptide, ghrelin, and cholecystokinin pathways are very important to health today — we all can hope they will be drawn upon to develop some good therapies for obesity. Addiction related stuff was dopamine-heavy but explained the concepts well.

50) “Seizures.” Pretty clinical chapter, although an atypically conceptual one; also useful as a mini-intro to EEG.

51) “Sleep.” We spend so much of our time sleeping and know so little about it that reading this type of material is always interesting. Trigger warning: I do not recommend reading it before you yourself go to sleep, because reading about the disorders might prime you to see them in yourself. Fig 51-2 could profitably be memorized. I would have liked to see a figure describing the relationships of the transcriptional oscillators — text is too confusing for those kinds of loopy relationships.

52) “Patterning the CNS.” I’ve always found studying neurodevelopment to be mostly about memorizing the alphabet soup — Hox, Wnt, BMP, etc. At least the images are colorful. Figure 52-17 is a cool example of plasticity; it shows how A1 can be remapped to look like V1 if the MGN gets input from the retina instead of the inferior colliculus.

53) “Neuron Growth.” I found the material in the second half of the chapter, on NT plasticity and neurotrophin signaling, to be way more interesting and relevant than that of the first half. To be fair, Figure 53-7 is really cool and shows why nuclear translocation is important in radial migration and why certain related mutations lead to defects in neurodevelopment.

54) “Axon Growth.” Some classic neuroscience topics here, like Sperry’s frog eye inversion experiment, growth cones, and midline crossing. For the last topic, the robo3 vs robo1/2 expression is a useful but somewhat challenging concept and deserved its own part of a figure.

55) “Synapse Formation.” A basic and essential neuro chapter, although it focused maybe slightly too much on the NMJ, which might be different from some of the CNS synapses most readers probably care more about. I mean, each skeletal muscle cell has more than one nucleus, making its transcriptional patterns just weird.

56) “Synaptic Refinement.” Most of this chapter uses ocular dominance columns as a model system for studying synapses. That’s useful, but so much of today’s work on synapses relies on techniques like two-photon microscopy and NT uncaging that I wish more general principles had also been discussed. Sections on neurexin and neuroligin were nice, and Figure 55-17 is cool, but then at the end they said that those two adhesion molecules might not actually be that important in vivo and I was a bit let down.

57) “Repairing CNS Damage.” Awesome chapter on axon degeneration and iPSCs, seems very modern and hopefully progress here will be plentiful before the 6th edition. Could have gone more into detail about the neural stem cell niche, e.g. the role of ependymal cells there. Myelin-neuron interactions sections were good.

58) “Sexual Dimorphism.” This chapter could have gotten *really* clinical but they did a good job of staying general; Figure 58-5 is a good example of this. One thing that slightly put me off was the sentence at the end of the introduction. Where is the evidence that our predecessors were really constrained by the simplistic view that genes and experiences acted independently of one another? I ask because we often paint past thinkers as more naive than they actually were to make ourselves look more nuanced, and it’s dangerous.

59) “Aging.” Really this chapter is on Alzheimer’s. A bit amyloid beta heavy for my taste, but I’m probably biased.

60) “Language.” High-level chapter; most of the neuro relies on lesion and imaging studies.

61) “Processing Disorders.” Fairly basic chapter. The Libet experiment is introduced and explained for the third time. Figure 61-10 is amazing, showing the difference in brain activity in the parahippocampal place area and fusiform face area when people see and think about a face or a house.

62) “Schizophrenia.” Short chapter. Clearly this is an important disorder and deserves its own section. I found myself wondering how well some of the findings, such as the decreased dlPFC spine density in Fig 62-4, have been replicated.

63) “Mood and Anxiety Disorders.” Has lots of nice figures showing where various drugs are believed to act. At times I found myself wondering, where is the evidence of efficacy? But upon reflection this is beyond the scope of their introductory chapter. It seems slightly much for the authors to assert that ketamine itself is not likely to be a successful antidepressant; who can say a priori where the trade-off of some amount of dissociative symptoms vs rapid improvement in depressive symptoms will fall for a particular patient?

64) “Neurodevelopmental Disorders.” This is a well-written chapter, using plain English to describe what happens in autism and other developmental disorders. Just barely missed the new DSM-V, which would have affected some of what they wrote.

65) “Learning and Memory.” Essential reading — different types of memory defects, basic and important conditioning paradigms, and a lot of their neurological substrates.

66) “Cellular Memory Storage.” Another banger. Lots on Aplysia, which of course is Kandel’s classic muse, and in particular learning mechanisms involved in the gill-withdrawal reflex. Fig 66-13 shows amygdala learning electrophysiology at the local field potential level. Fig 66-16 is also really cool. To me it felt a little CREB-heavy, I really wish there were maybe one more example of a transcription factor and then we could induct some principles, but maybe this is more a critique of the field (and really, the funding environment) than the textbook.

67) “The PFC and Hippocampus.” Nice chapter to finish on — they save a lot of cool stuff for the end. Gives nice examples of how LTP relies on a family of processes that are specific to each synapse and tend to covary across brain regions. One nit is that the text focuses a little too much on PKM-z, especially given that a paper came out within the past year since the text was published that largely refutes its role in long-term LTP. Still, Fig 67-9 is a really nice diagram, and Fig 67-16 showing remapping of place field formation with and without LTP, is beautiful.