Samuel Wilks on the need for multidisciplinary neurologic research (in 1864)

A history of dementia often starts in 1907 with the work of Alois Alzheimer, but in reality it should start much earlier. In 1864, Samuel Wilks wrote “Clinical Notes on the Atrophy of the Brain,” one of the first studies to point out gross atrophy of the sulci in the brains of persons who had dementia prior to death. This is a great paper! I loved the intro:

WERE an occasional comparison instituted between the experiences of those who practise in special but different departments of the profession, it would conduce not only to the fulfilment of some higher general truths than we now possess, but afford to the individual labourer in his department a more just and less narrow view of the field of observation which is always more immediately before his eye. A close observance to one section of medicine may produce much accurate and minute knowledge, but since the division of our art into branches is artificial rather than real, the knowledge therein obtained is regarded apart from its natural relations, and becomes so distorted as to lose much of its value as truth. If the various sciences into which we divide nature for the purposes of study are artificial, and it be true that an exclusive devotion to one of them can never give to its follower a correct insight into the operations of nature, so more true must it be that the general laws of human pathology can scarcely be gleaned in an exclusive practice in one single department.

It may seem almost impertinent to make these remarks in a Journal devoted to a special object, nor were they, indeed, intended to apply to the study of mental disorders, which must be undertaken in an almost isolated manner; and yet an opinion has obtained hold of me (which, however, may be erroneous) that even here some too narrow views may be held of cerebral pathology, and this opinion, right or wrong, has suggested the remarks in the present communication. To be more explicit: I have thought that those who are occupied in the practice and study of any one department might possibly look upon some morbid condition or other feature in a case, as peculiar to a certain form of disease. Thus, in connection with the subject on which I purpose to make a few remarks, it has seemed to be inferred that a certain morbid phenomenon has been found exclusively in lunatic asylums; and, at the same time, to be inferred by a writer on infantile diseases, and who is probably destitute of the knowledge just mentioned, that this phenomenon is intimately connected with the cerebral affections of children. So, also, with the general subject of the following observations, atrophy of the brain: this has appeared to me to have been regarded by some as a condition attaching to those who have died of mental affections, and not only so, but of some special form of insanity; others would describe a similar condition as resulting from repeated attacks of delirium tremens; whilst others write of a state not distinguishable from these as the ordinary result of old age. From having no inclination towards any of these special departments, I have endeavoured to take a comprehensive view of such pathological changes, and, as regards the subject before us, to discover at what stage our knowledge has reached of this morbid condition, and what is its true pathological significance; leaving it for further research to elucidate its varieties and the different methods by which these are brought about.

Targeting neural reserve in dementia

David Bennett has an interesting proposal for future AD/dementia trials. He points out that a large percentage of persons diagnosed with AD actually have mixed pathology, and that solving each of those pathologies with an individual agent is going to be difficult and costly. Instead, he proposes that if one were to target neural reserve — i.e., resilience against dementia — that might be a more tractable strategy.

[C]onsider neural reserve as a therapeutic endpoint. There is no evolutionary pressure to create systems that protect the brain from any brain pathology of old age, let alone different systems that offer protection from different pathologies. Thus, finding that myriad factors alter the trajectory of cognitive decline agnostic to underlying brain pathologies is expected. A hypothetical therapeutic agent that targets neural reserve could be used to offset any and likely all common brain pathologies that alter cognition.

I guess the main problem with this proposal is that it might be harder to find this sort of all-encompassing pro-cognitive aging agent. If it’s easily available as an exogenous chemical and has a strong effect, why didn’t natural selection already sculpt our brain so that it — or its effects — would be present? That said, I disagree that there is prima facie no evolutionary pressure for this: given the potential importance of kin selection in our evolutionary history, elders who remain cognitively intact can still boost the fitness of their relatives.

But that’s just speculation and Bennett’s strategy deserves serious consideration, especially given the series of failures of AD clinical trials.

In terms of already available agents, my immediate thought is nootropics — drugs meant to boost cognition. Some of the most commonly discussed nootropics are caffeine, nicotine, and modafinil. Caffeine has been shown to be protective against dementia in the epidemiologic studies you read about in the news every week. A very small trial of modafinil (n = 23) found no benefit for apathy in AD. Nicotine has been fairly widely studied in AD, but its efficacy is still not clear. All of them deserve more attention, but these trials are difficult to fund because generic versions of all three are now available.

Is progeria related to aging in the same way that familial AD is related to sporadic AD?

Attention conservation notice: Someone has probably made this point before.


Progeria is a genetic disorder caused by mutations in lamin A, a nuclear lamina protein. Since it manifests in several ways that resemble an aged state (e.g., wrinkled skin, atherosclerosis, kidney failure, loss of eyesight), it is widely believed to be an early-onset version of aging.

Yet, few people think that nuclear membranes are the only thing that is altered in aging, as aging is generally considered too complicated for that. Instead, nuclear membranes are recognized to be one aspect within a larger pathway that is altered in aging.

Familial Alzheimer’s disease (AD) is a genetic disorder caused by mutations in APP, PSEN1, or PSEN2, all of which are part of the APP processing pathway and thus involved in (among other things) amyloid plaque production. Since it manifests in several ways that resemble sporadic AD (episodic memory loss, Aβ plaques, tau tangles), it is widely believed to be an early-onset version of sporadic AD.

In contrast to progeria and aging, familial AD is generally thought to be a model of sporadic AD that captures almost all of the key pathways involved. As a result, one of the major justifications for clinical trials to treat sporadic AD by removing amyloid plaques is that the genetics of familial AD are all related to APP processing and thus amyloid plaque production.

There are probably several good arguments for why this progeria:aging::familial AD:sporadic AD contrast doesn’t make sense, but I still thought it might be interesting.

Making a shiny app to visualize brain cell type gene expression

Attention conservation notice: A post-mortem of a small side project that is probably not interesting to you unless you’re interested in molecular neuroscience.


This weekend I put together an R/Shiny app to visualize brain cell type gene expression patterns from 5 different public data sets. Here it is. Putting together a Shiny application turned out to be way easier than expected — I had something public within 3 hours, and most of the rest of my time on the project (for a total of ~ 10 hours?) was spent on cleaning the data on the back end to get it into a presentable format for the website.

What is the actual project? The goal is to visualize gene expression in different brain cell types. This is important because many disease-relevant genes are only expressed in one brain cell type but not others, and figuring this out can be critical to learning about the etiology of that disease.

There’s already a widely-used web app that does this for two data sets, but since this data is pretty noisy and there are subtle but important differences in the data collection processes, I figured that it’d be helpful to allow people to quickly query other data sets as well.

As an example, the gene that causes Huntington’s disease has the symbol HTT. (I say “cause” because variability in the number of repeat regions in this gene correlates almost perfectly with the risk of developing Huntington’s disease and with its age of onset.) People usually discuss neurons when it comes to Huntington’s disease, and while this might be pathologically valid, by analyzing the data sets I’ve assembled you can see that this gene is expressed across a large number of brain cell types. This raises the question of why — and/or if — variation in its number of repeats only causes pathology in neurons.

[Screenshot: HTT expression across brain cell types in the web app]

Here’s another link to the web app. If you get a chance to check it out, please let me know if you encounter any problems, and please share it if you find it helpful.

References

Aziz NA, Jurgens CK, Landwehrmeyer GB, et al. Normal and mutant HTT interact to affect clinical severity and progression in Huntington disease. Neurology. 2009;73(16):1280-5.

Huang B, Wei W, Wang G, et al. Mutant huntingtin downregulates myelin regulatory factor-mediated myelin gene expression and affects mature oligodendrocytes. Neuron. 2015;85(6):1212-26.

Eight years of tracking my life statistics

Attention conservation notice: Borderline obsessive navel-gazing.


Most mornings, I start my day — after I lie in bed for a few minutes willing my eyes to open — by opening up a Google spreadsheet and filling in some data about how I spent that night and the previous day. I’ve been doing this for about eight years now and it’s awesome.

I decided to post about it now because self-tracking as a phenomenon seems to be trending down a bit. Take for example former WIRED editor Chris Anderson’s widely shared tweet:

[Screenshot: Chris Anderson’s tweet]

So this seems a good time to reflect upon the time I’ve spent self-tracking so far and whether I’m finding it useful.

But first, a Chesterton’s fence exercise: why did I start self-tracking? Although it’s hard to say for sure, here’s my current narrative:

  • When I was a senior in high school, I remember sitting in the library and wishing that I had extensive data on how I had spent my time in my life so far. That way when I died, I could at least make this data available so that people could learn from my experiences and not make the same mistakes that I did. I tried making a Word document to start doing this, but ultimately I gave up because — as was a common theme in my misspent youth — I became frustrated with myself for not having already started it and decided it was too late. (I hadn’t yet learned about future weaponry.)
  • I used to read the late Seth Roberts’ blog — it was one of my favorites for a time — and he once wrote a throwaway line about how he had access to 10 years of sleep data on himself that he could use to figure out the cause of his current sleep problems. When I read that early in college I thought to myself “I want that.”
  • In sophomore year of college my teacher and mentor Mark Cleaveland assigned me (as a part of a class I was taking) to write down my sleep and how I spent my time in various activities for a week. This was the major kick into action that I needed — after this, I started tracking my time every morning on the spreadsheet.

It takes about 66 days to develop a habit. The more complex the habit, the longer it takes. I think that by about 100-150 days in it was pretty ingrained in me that this was just something that I do every morning. After that, it didn’t take much effort. It certainly did take time though — about 3-5 minutes depending on how much detail I write. That’s the main opportunity cost.

Three of the categories I’ve tracked pretty consistently are sleep, exercise, and time spent working.

Here’s hours spent in bed (i.e., not necessarily “asleep”):

[Chart: daily hours spent in bed over eight years]

black dots = data points from each day; red line = 20-day moving average

Somewhat shockingly, the mean number of hours I’ve spent in bed the last 8 years is 7.99 and the median is exactly 8.
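The summary statistics and the red trend line are simple to reproduce. Here’s a minimal Python sketch (the hours below are toy values standing in for the real spreadsheet export, and I use a 3-day window so the toy output is visible; the charts use 20 days):

```python
# Minimal sketch: daily-log summary stats plus a trailing moving
# average like the red line in the charts. The hours below are toy
# values, not the real eight-year log.
import statistics

def moving_average(values, window=20):
    """Trailing moving average: one value per fully-filled window."""
    return [
        sum(values[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(values))
    ]

hours_in_bed = [8.0, 7.5, 8.5, 9.0, 7.0, 8.0, 8.0]  # toy week of data
print("mean:", round(statistics.mean(hours_in_bed), 2))    # 8.0
print("median:", statistics.median(hours_in_bed))          # 8.0
print("3-day moving average:", moving_average(hours_in_bed, window=3))
```

With the real export, `hours_in_bed` would be read from the CSV, one value per day, and `window=20` matches the charts.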

Here’s exercise:

[Chart: daily hours of exercise over eight years]

I’m becoming a bit of a sloth! Hopefully I’ll be able to get this back up over the next few years. Although note that I have no exercise data for a few months in Summer ’15 because I thought that I would switch solely to Fitbit exercise data. I then got worried about vendor lock-in and started tracking manually again.

Here’s time spent working (including conventional and non-conventional work such as blogging):

[Chart: daily hours worked over eight years]

One of the other things I’ve been tracking over the past few years is my stress, on an arbitrary 1-10 scale. Here’s that data:

[Chart: daily stress level (1-10) over eight years]

In general, my PhD years have been much less stressful than my time studying for med school classes and Step 1. Although it’s not perfect, I’ve found this stress level data particularly valuable. That’s because every now and then I get stressed for some reason, and it’s nice to be able to see that my stress has peaked before and has always returned to reasonably low levels eventually. I think of this as a way to get some graphical perspective on the world.

I track a few other things, including time spent on administrative tasks (like laundry), time spent leisure reading, time spent watching movies, and time spent socializing.

I also track some things that are too raw to write about publicly. Not because I’m embarrassed to share them now, but because I’m worried that writing them in public will kill my motivation. This is definitely something to consider when it comes to self-tracking. For me, my goal has first and foremost been about self-reflection and honesty with myself. If I can eventually also share some of that with the world, then more’s the better.

Overall, I’ve found three main benefits to self-tracking:

  1. Every now and then, I’ll try to measure whether a particular lifestyle intervention is helping me or not. For example, a couple of months ago I found a good correlation between taking caffeine (+ L-theanine) pills and hours worked. Although this is subject to huge selection bias, I still found it to be an interesting effect and I think it has helped me optimize my caffeine use, which I currently cycle on and off of.
  2. There have been a few times these past 8 years when I’ve suddenly felt like I’ve done “nothing” in several months. One time this happened was about a year into my postbac doing science research at the NIH when it seemed like nothing was working, and it was pretty brutal. That time and others, it’s been valuable for me to look back and see that, even if I haven’t gotten many tangible results, I have been trying and putting in hours worked. Especially in science where so many experiments fail, it’s helpful for me to be able to measure progress in terms of ideas tried rather than papers published or some other metric that is much less in my control. GitHub commits could also work in this capacity for programmers, although that’s slightly less general.
  3. The main benefit, though, has not been my ability to review the data, but rather as a system for incentivizing me to build process-based habits that will help me achieve my goals. I enjoy the bursts of dopamine I get when I’m able to write that I worked hard or exercised the previous day — or that I got a lot of high-quality socializing in with friends or family — and it makes me want to do that again in the future.
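For what it’s worth, the caffeine-vs-hours-worked check in point 1 is just a Pearson correlation between a 0/1 daily indicator and a continuous column. A sketch with invented numbers (the real analysis would run on the spreadsheet export, and the selection-bias caveat above still applies):

```python
# Sketch of the caffeine-pill vs. hours-worked correlation from point 1:
# Pearson correlation between a 0/1 daily indicator and hours worked.
# All numbers below are invented for illustration; this is descriptive
# only and says nothing about causation.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

caffeine_pill = [1, 1, 0, 0, 1, 0, 1, 0]            # toy: pill taken that day?
hours_worked  = [9.0, 8.5, 5.0, 6.0, 9.5, 4.5, 8.0, 5.5]
print(round(pearson(caffeine_pill, hours_worked), 3))  # prints 0.953
```

A spreadsheet can compute the same thing with its built-in CORREL function; the point of logging consistently is that columns like these are always available to cross-check.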

Do you want to try a similar thing? Check out this blank Google spreadsheet for a quick start; it has a bunch of possible categories and a few example days for you to delete when you copy it over to your own private sheet. I like Google sheets because they are free and accessible anywhere with an internet connection, but they’re certainly not a requirement.

Even if you don’t try it, thanks for reading this essay and I hope you got something out of it.

 

Arterial aging and its relation to Alzheimer’s dementia

I’m a big proponent of the role of arterial aging in explaining variance in dementia risk, in large part because it accounts for the large role that vascular risk factors play in promoting the likelihood of Alzheimer’s disease (AD). However, some data suggest that the burden of ischemic events and stroke cannot explain all of the vascular-related AD risk. Recently, Gutierrez et al. published a nice paper suggesting that non-atherosclerotic changes in arteries with age may explain some of this residual vascular-related risk of AD. In particular, they examined 194 autopsied brains and found five arterial features that strongly correlated with aging, including decreased elastin and concentric intimal thickening. Importantly, these features also correlated with AD risk independently of age.

The authors propose that these arterial aging features are a consequence of mechanical blood-flow damage that accumulates over the years. If the damage is truly mechanical, it may be difficult to reverse with existing cellular and molecular anti-aging therapies. For people interested in slowing down aging, the brain must be a top priority, because it cannot be replaced even by highly advanced tissue engineering approaches that might replace other organs. Thus, this sort of arterial damage needs to be addressed, but to the best of my knowledge it has not been, which is one of the many reasons I expect serious anti-aging therapies are much further out than is commonly speculated in the popular press.

Are four postulated disease spectra due to evolutionary trade-offs?

I recently read Crespi et al.’s interesting paper on this subject. They describe eight diseases as due to four underlying diametric sets that can be explained by evolutionary/genetic trade-offs:

  1. Autism spectrum vs psychotic-affective conditions
  2. Osteoarthritis vs osteoporosis
  3. Cancer vs neurodegenerative disorders
  4. Autoimmunity vs infectious disease

Of these, #2 and #4 seem obviously correct to me based on my fairly limited med school exposure, and they describe the evidence in a systematic way. I don’t know enough about the subject matter to speculate on #1, but I would like to see more genetic evidence.

Finally, I found their postulated explanations for #3 somewhat weak; I personally think it is a selection-bias trade-off, i.e., a case of Berkson’s bias applied to trade-offs. That is, since both cancer and neurodegeneration are age-related conditions, you could think of aging as the “agent” that selects either neurodegeneration or cancer as the ultimate cause of age-related death. I could be persuaded to change my mind on the basis of genetic predisposition evidence or some other mechanism, but I found the apoptosis mechanism weak: apoptosis occurs (or fails to occur when it should) in many, many diseases, and moreover it is far from clear that neurodegeneration is mostly due to apoptosis as opposed to some other mechanism of cell death. A mechanism that might be most persuasive to me is one related to immune cells, since they clearly play a large role in regulating cancer growth and also show high expression of the top GWAS risk genes for Alzheimer’s disease. But I still suspect that the selection bias is primary.