Monday, July 10, 2017

Taking yesterday's post further, the question quickly becomes: what the hell is motor preparatory activity for? Note that when I say preparatory activity I'm talking about e.g. delayed reaching tasks and other experiments where they record from the dorsal premotor cortex (PMd) in primates. But what about rodents?

Maybe this question is even deeper than it sounds. Rodents are a real mystery because it turns out you can completely wipe out a rat's motor cortex and it continues moving and living normally, with one caveat: if you introduce an unexpected perturbation into the environment they are used to (i.e. some behavioral assay they were trained on prior to the motor cortex removal), they don't seem to know what to do about it. That might be a bit of an overinterpretation of behavior on my part, but the point is that the only kinds of situations where de-motor-corticated rats behave differently from rats with a motor cortex are those where such unexpected perturbations are introduced. And, funnily enough, after the first time they encounter the perturbation they quickly adapt and deal with it just like normal rats. This result is at the crux of Adam Kampff's work trying to figure out what the hell cortex is really for. Their working hypothesis is that it is a brain structure evolved to produce robust behaviors - behaviors that are resistant to all kinds of unexpected situations, absolutely vital for survival (e.g. https://www.youtube.com/watch?v=u73hRPH4RQs).

Zooming back in to rodents vs primates, the picture looks somewhat like this: whereas primate motor cortex directly controls muscles, rodent motor cortex seems to provide highly specialized input to subcortical structures that directly control muscles. I probably need to read up more on the exact anatomical connections, but clearly muscle control can be performed by rodents without their motor cortex (through spinal reflex loops + subcortical input). What is the analog of this in primates? Could it be PMd? Perhaps it usually lives in the nullspace, then jumps into the potent space when absolutely needed. The delayed reaching task doesn't seem like the kind of unexpected situation where this sort of processing would be needed, but then again rat motor cortex presumably is not silent during usual run-of-the-mill motor activity either.

Here's an idea for nullspace computation: the motor cortex is constantly processing and updating its model of the environment (w.r.t. what motor movements are useful to make - e.g. there is a stable wall to my left and an unstable wall to my right, so make sure to hold on to the left in case of an earthquake), ready to exploit this information when need be by jumping into the potent space. There's a rodent and primate experiment there. Could PMd be doing something like this? After all, cuing a reaching target is an update to the kind of information about the world needed to behave correctly in the task. As is applying a force field during reaching.

Back to computation in the nullspace. How is it useful? It's easy to see how it's useful for preparatory motor activity: the null/potent distinction allows for gating the downstream effect of neural activity, so that it can act in a preparatory manner. In other words, by living in the nullspace, preparatory activity can safely plan the next movement without interfering with the current one.

But let's break this picture down a little further. What is a "movement"? Where is the distinction between the current movement and the next one being planned? Does the motor system really work this way? The simplest way to imagine what motor systems do is: take some vector representing what you want achieved, then calculate and spit out the set of muscle commands that will achieve it. Where does preparatory activity come into play there? The idea in the literature seems to be that motor cortex is a "dynamical system", with the property that it behaves quite differently when put in different initial conditions. The preparatory activity's job is thus to choose the right initial conditions. But again - what does "initial" mean? In papers, the initiation of the movement is given to you on a platter with the delayed reaching task "Go" cue. But in the real world there are no Go cues. And there is no delay period between target onset and go cue (except in really contrived situations, e.g. ready, set, go!). So what does preparatory activity do then? It's a bit of a mystery to me. I should probably read more, but it sounds like a classic case of abstracting principles of neural activity from a highly contrived and artificial experimental task, i.e. modern calcium-imaging-era neuroscience.

One possibility is the existence of primitive motor movements. In this case, there are very distinct units called "movements", and each one can be produced by simply initializing the motor system in the right way and then letting it run (presumably with feedback). I'm pretty sure the idea of motor primitives has been around for a while, but I should look more into it. The idea is quite appealing from a learning perspective too, whereby simple tasks can be easily accomplished by a sequence of motor primitives (i.e. a sequence of different initial conditions) but harder tasks require learning new motor primitives, which might require some rewiring of e.g. motor cortex.

I'm actually working on this problem right now - although avoiding the issue of learning for now and just trying to see if we can hardwire this into a network. What I'm finding, in very preliminary stages (just trying to do this with a linear dynamical system - which could be harder than in a nonlinear network, but linear systems are easy to work with analytically), is that it's really quite hard to design a dynamical system with the desired properties. You want a dynamical system that (1) produces highly (meaningfully) distinct trajectories when started at distinct initial conditions but (2) is robust to small perturbations of its initial condition. The brain is really noisy, so (2) is just as important as (1). It is worth noting that - at least in rat barrel cortex - cortical dynamics look to be pretty chaotic, i.e. condition (2) doesn't hold. But one could easily imagine motor cortex is wired up in a particular way for (2) to hold. What I'm finding, though, is that making (1) hold is pretty hard.
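To make the tension concrete, here is a minimal numerical sketch - my own toy setup, not from any paper - of why (1) and (2) pull against each other in a linear system. For dx/dt = Ax, the gap between any two trajectories obeys the same dynamics as the trajectories themselves, so uniform contraction shrinks the meaningful A-vs-B gap of (1) at the same rate it shrinks the noise gap of (2); any leeway has to come from direction-dependent (e.g. non-normal) dynamics.

```python
# Toy illustration (my own setup): the gap between trajectories of a linear
# system evolves under the same dynamics as the trajectories themselves.
import numpy as np

def simulate(A, x0, n_steps=500, dt=0.01):
    """Euler-integrate dx/dt = A x from initial condition x0."""
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(n_steps):
        xs.append(xs[-1] + dt * (A @ xs[-1]))
    return np.stack(xs)

rng = np.random.default_rng(0)
n = 10
A = rng.normal(scale=1.0 / np.sqrt(n), size=(n, n)) - 1.5 * np.eye(n)  # stable

x0_a = rng.normal(size=n)                     # "movement A" initial condition
x0_b = rng.normal(size=n)                     # "movement B" initial condition
x0_a_pert = x0_a + 0.05 * rng.normal(size=n)  # small perturbation of A

traj_a, traj_b = simulate(A, x0_a), simulate(A, x0_b)
traj_a_pert = simulate(A, x0_a_pert)

# (1) wants the A-vs-B distance to stay large; (2) wants A-vs-A' to stay
# small. Both gaps decay together under contracting dynamics.
for t in (0, 100, 300, 500):
    gap1 = np.linalg.norm(traj_a[t] - traj_b[t])
    gap2 = np.linalg.norm(traj_a[t] - traj_a_pert[t])
    print(f"t={t:3d}  A-vs-B: {gap1:.4f}   A-vs-A': {gap2:.4f}")
```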

One caveat that came to mind: when we speak about the initial conditions of the system, I think it's important to note that the system is a closed-loop feedback system. One could imagine the feedback makes it easier to ensure (2) holds, while the wiring makes (1) hold.

Friday, July 7, 2017

Not much on my mind this morning. Spent a long time this weekend helping my girlfriend with her master's dissertation research, delving into marketing and psychology journals. Pretty appalled by the quality of papers in that field. They often seem to do all the right analyses and statistical tests, and I think I even saw corrections for multiple comparisons at times. But then they don't include a single plot! Bizarre. Am I seriously supposed to go through and actually read the results section? Read the numbers and the p-values and test statistics? Ridiculous. To each their own, I guess. The genuinely bad part was mostly the methods: often terribly written and badly executed. Brings back memories from my undergrad of reading really bad psychology papers - the marketing stuff seems to fall right in that corner of the field that gives psychology such a bad rep, i.e. I wonder how much of this stuff is replicable.

We're currently redesigning the systems and theoretical neuroscience course taught at the Gatsby. It's a course designed for both students from the Gatsby (from maths/physics/computer science backgrounds) and students from the SWC (from biology backgrounds) to take together. The structure says it all: two lectures per week, one in theory and one in biology. How do we do this well? To start thinking about this I lined up all the topics one would want to cover in a "foundations of theoretical neuroscience" class and then thought about what systems/biology topics fit alongside, e.g. coding - sensory systems, optimal control - motor systems, networks - ?, ... It's not so easy. But the hardest part is designing the biology lectures in a way that complements the theory (I am starting to sound quite theory-biased, not sure if that's a good thing :s). I absolutely hate classic "intro to the visual system" lectures that go through the classical textbook picture of the visual system without really delving into detail. But maybe that's necessary to be able to go any further? I'm not sure. I think the key thing is to take the Marrian approach and always start from "what is the problem this system is trying to solve?", then "what would you expect a system built to solve this to look like?", and finally "what does it actually look like?" - but now with an emphasis on the computational problem. This obviously gives a highly incomplete picture, though, since there are many important things observed experimentally for which we have no idea what they are for. Can't leave those out: this is the "bottom-up" side of theory, whereby we try to come up with a theory to explain an observation (as opposed to "top-down" theory, where you specify a computation and try to come up with a theory for how a brain-like thing could do it, e.g. supervised learning -> backprop -> Tim Lillicrap's research). You could just tack these on at the end. I don't think this would be the worst idea in the world. Once you've set up your investigation of the visual system as looking for ways in which the brain solves some problem, that already gives you some perspective and a framework within which to think about what these new puzzling observations mean. Another task is convincing someone to build their lecture this way - it's a lot more work than your typical intro-to-___-system lecture. Also, this is a very top-down (maybe theory-centric) way of thinking about how to teach neuroscience. I think it's the right way, but does everyone else?

The reason this just popped into my head is that one area I realized was totally underrepresented was cognitive psychology. I think the classic macrostructure in systems courses is (sensory systems)-(motor systems)-(cognitive and learning systems). Indeed, this is mainly how we have divided it. Shouldn't cognitive psychology have a place in that last section? It's not entirely clear, which I think is very sad. Usually that last section consists of reinforcement learning, conditioning, memory, and decision making (in the sense of e.g. random dot kinematograms), with their biological counterparts in reward systems, neuromodulators, hippocampus, and LIP stuff. What about language? What about reasoning? There is loads of research out there on these higher-level, truly "cognitive" phenomena. But unfortunately we have no way of mapping these to biology. In my opinion none of that research is anywhere close to being mapped onto biology, mainly because these are phenomena unique to human beings (in some sense making them the most important to study), so we can't do calcium imaging or ephys - just fMRI or EEG from time to time. That said, there are a lot of classic results and interesting patterns in the data. Just because we can't relate it to brains doesn't necessarily mean we shouldn't include it in a systems course. Or does it? There is something to be said here about what systems neuroscience students should know. But I think there is also something deeper to be said about the direction of such research. How far can we take such investigations without grounding them in biology? There is a reason psychology is one of the fields suffering the most from the replication crisis...

Wednesday, July 5, 2017

Coming back to yesterday's post - does the muscle activation --> limb movement mapping really change in regular life? I mentioned it because it certainly does in these classic force field reaching experiments - but when do you ever encounter a force field in real life? By and large I think this mapping stays relatively constant. Except maybe when you are lifting weights. I haven't been able to think of another situation where it doesn't.

So my current outlook is that the motor system has the forward mapping from muscle activation to limb movement hardwired into it. And possibly the reverse mapping as well (likely, but maybe not necessary?). Its key job then is to figure out what limb movements to make: given some goal, infer what sequence of movements is needed to achieve it. As I mentioned in the previous post, this inference will depend on a lot of factors about your environment. A given limb movement will have very different consequences in different environments. More on this another time.

Recently I've been thinking about computation in the nullspace. There is a big idea that has been going around in the motor cortex literature since around 2010ish, spurred by the excellent work of Krishna Shenoy and Mark Churchland. The idea is that we should think about motor cortex as a dynamical system, whereby different initial conditions lead to different trajectories in phase space that translate to different limb movements. The prediction then being that in a delayed reaching task, the preparatory activity (in dorsal premotor cortex) that occurs during the delay period (a fixed-length time interval between target onset and reaching movement onset at a go cue) serves to put the system in the right initial conditions to generate the appropriate reach. But this raises a suddenly obvious question: how does this dorsal premotor cortex preparatory activity not generate movements? Their answer: it lives in the nullspace of motor cortex activity, meaning that it doesn't affect motor cortex activity. E.g. if a motor cortical neuron has two presynaptic inputs with weights +1 and -1, all activity patterns in the nullspace are such that these two input neurons are equally active (so the postsynaptic neuron is silent). Or something along these lines - we don't really have a mechanistic circuit model to explain these phenomena yet....
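Just to make that toy example explicit, here it is in numpy - nothing beyond the +1/-1 picture above:

```python
# The +1/-1 toy example in code: two presynaptic neurons feed one downstream
# neuron through readout weights w = (+1, -1). Balanced activity lies in the
# nullspace of w and drives nothing; imbalanced activity is "potent".
import numpy as np

w = np.array([1.0, -1.0])        # synaptic weights onto the downstream neuron

null_pattern = np.array([3.0, 3.0])     # both inputs equally active
potent_pattern = np.array([3.0, 1.0])   # imbalanced activity

print(w @ null_pattern)    # 0.0 -> downstream neuron receives no net input
print(w @ potent_pattern)  # 2.0 -> downstream neuron is driven
```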

Could nullspace activity be useful for other computations? One recent study up on bioRxiv by Juan Gallego and colleagues shows that in a force field task exactly like the one I described in the previous post, the only difference in preparatory activity between early (not yet learned, so bad reaching performance) and late (learned, with stable good reaching) force field trials within a session was activity in the nullspace. This makes sense from the above point of view of using the nullspace to set up the right initial conditions - presumably you need different initial conditions when there is a force field. But nullspace activity is happening all the time, concurrent with potent-space activity (the complement of the nullspace - activity that does affect downstream areas). How can we use this for efficient computation? What are other situations where it could be useful?
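For what it's worth, the null/potent split is easy to compute once you posit a readout matrix. A hedged sketch in the spirit of those output-null analyses - the readout C below is random, standing in for whatever mapping from neurons to downstream/muscle dimensions one would actually estimate from data:

```python
# Splitting population activity into potent and null components, given a
# hypothetical readout C from n neurons to m output dimensions. C is random
# here purely for illustration.
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_outputs = 20, 3
C = rng.normal(size=(n_outputs, n_neurons))    # hypothetical readout matrix

P_potent = C.T @ np.linalg.inv(C @ C.T) @ C    # projector onto row space of C
P_null = np.eye(n_neurons) - P_potent          # projector onto nullspace of C

x = rng.normal(size=n_neurons)                 # one population activity pattern
x_potent, x_null = P_potent @ x, P_null @ x

print(np.allclose(x, x_potent + x_null))       # True: the split is exact
print(np.linalg.norm(C @ x_null))              # ~0: null part is invisible downstream
print(np.allclose(C @ x_potent, C @ x))        # True: potent part carries all output
```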

Monday, July 3, 2017

This is my first stream-of-consciousness writing post. I've decided to try to do this daily, or at least weekdaily, in the morning, to just have a place to put ideas down on paper. I've been thinking a lot about motor control and motor learning. What are the kinds of mechanisms the brain needs to do this? It seems obvious that control theory is relevant here - I should probably delve into some of that literature. There is so much neuroscience literature to look at first though, it's a bit overwhelming. The big question that seems to me to remain totally unanswered is where the learning takes place. Not where in the brain, but where in the computational graph, so to speak. To be able to generate the right motor commands to achieve a given goal requires you to 1. know the mapping from goal to [limb] movement, and 2. know the mapping from neural activity to [limb] movement. And in fact the second mapping contains two mappings within it: 2a. from neural activity to muscle contraction, and 2b. from muscle contraction to limb movement. The hard part, I guess, is that mapping 1 is not unique - it is one-to-many, since many different movements can achieve the same goal (so the inverse mapping, from movement to goal, is not injective). Mapping 2b is definitely unique, but it turns out that under certain conditions 2a sometimes is not (cf. some Science paper from early 2000's).

Mapping 1 is not unique, but it is obviously constrained, particularly for easy problems. You wouldn't swing your arm around your head and throw it out in front of you just to reach for a coffee mug right in front of you - that's a massive waste of energy (and a possible risk of injury). So the space of muscle movements that map onto a given motor goal is certainly highly constrained already by a reasonably obvious set of cost considerations (this solves Bernstein's famous problem if you consider that your costs are only specific to the task-relevant errors - Todorov & Jordan 2002). I think mapping 1 is also highly variable, depending on the environment. If you want to reach for an object on an elevated surface, you will probably use totally different movements depending on whether the surface you are standing on is perfectly stable or off balance.
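A concrete toy version of this redundancy (my own illustration, not from the literature): even a two-link planar arm has two joint configurations - "elbow up" and "elbow down" - that put the hand on the same target, and you need a cost to pick between them.

```python
# Two-link planar arm: mapping 2b (joints -> hand) is unique, but its inverse
# (mapping 1's redundancy) is one-to-many, resolved here by a simple cost.
import numpy as np

L1, L2 = 1.0, 1.0  # link lengths

def forward(q1, q2):
    """Joint angles -> hand position (the unique forward direction)."""
    x = L1 * np.cos(q1) + L2 * np.cos(q1 + q2)
    y = L1 * np.sin(q1) + L2 * np.sin(q1 + q2)
    return np.array([x, y])

def inverse(x, y):
    """Hand position -> the two joint-angle solutions (one-to-many)."""
    c2 = (x**2 + y**2 - L1**2 - L2**2) / (2 * L1 * L2)
    solutions = []
    for q2 in (np.arccos(c2), -np.arccos(c2)):   # elbow down / elbow up
        q1 = np.arctan2(y, x) - np.arctan2(L2 * np.sin(q2), L1 + L2 * np.cos(q2))
        solutions.append((q1, q2))
    return solutions

target = (1.2, 0.8)
sols = inverse(*target)
for q in sols:
    print(q, forward(*q))   # both configurations reach the same target

# A simple cost (total joint excursion from rest) resolves the redundancy:
rest = np.zeros(2)
best = min(sols, key=lambda q: np.linalg.norm(np.array(q) - rest))
print("chosen:", best)
```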

Let me talk about one other situation: the highly contrived experimental paradigm of making reaches within a force field. This paradigm has been studied over and over again by people like Emilio Bizzi and others over the past couple of decades as a way of investigating motor learning, or motor "adaptation" (there will inevitably be more on this distinction in another post). Now let's put it in our picture of mappings 1, 2a, and 2b. The force field leads to a change in the movement produced by a given muscle activation pattern. Well shit, this is a change in mapping 2b - the only one we said was unique. So now we have a twofold problem: we need (to learn) a context-specific (non-unique) mapping 1 and a (presumably unique) mapping 2b.
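Here's a minimal point-mass sketch of that (all parameters invented): the same feedforward force command produces a straight reach without the field and a laterally pushed trajectory once a velocity-dependent curl field is switched on - i.e. mapping 2b has changed under the motor system's feet.

```python
# Point-mass reach with and without a velocity-dependent curl force field.
import numpy as np

dt, m = 0.01, 1.0
B = np.array([[0.0, 13.0], [-13.0, 0.0]])   # curl field: F_field = B @ velocity

def simulate(force_cmd, field_on):
    pos, vel = np.zeros(2), np.zeros(2)
    traj = [pos.copy()]
    for f in force_cmd:
        F = f + (B @ vel if field_on else 0.0)
        vel = vel + dt * F / m
        pos = pos + dt * vel
        traj.append(pos.copy())
    return np.stack(traj)

# A naive "reach upward" command: accelerate, then decelerate along y.
T = 100
force_cmd = np.zeros((T, 2))
force_cmd[: T // 2, 1] = 2.0
force_cmd[T // 2 :, 1] = -2.0

straight = simulate(force_cmd, field_on=False)
curved = simulate(force_cmd, field_on=True)
print("endpoint without field:", straight[-1])
print("endpoint with field:   ", curved[-1])   # pushed off to the side
```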

Do we lump them together? Which of these (or both) corresponds to the famous forward models allegedly found in the cerebellum etc.? Will have to get back to you on this...

My mind keeps going to model selection when I think about this context-specific mapping idea. Are we capable of learning these models on the fly? Or do we store a learned set of them that we can switch between? If the latter, we need a method for selection. This is a famously hard problem in statistics, although maybe for reasons that are not applicable here (i.e. higher complexity = higher likelihood). More on this another time!
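One way the "stored set + switch" option could work, sketched below under heavy assumptions - this is just a MOSAIC-flavored toy of mine, with invented numbers: keep a few candidate forward models and reweight them online by how well each predicts the sensory consequences of the last command.

```python
# Toy model selection among stored forward models via prediction error.
# All models and constants here are invented for illustration.
import numpy as np

# Candidate forward models: next_vel = vel + dt * (force + B_i @ vel) / m
dt, m = 0.01, 1.0
B_candidates = [np.zeros((2, 2)),                       # "no field" model
                np.array([[0.0, 13.0], [-13.0, 0.0]])]  # "curl field" model

def predict(B, vel, force):
    return vel + dt * (force + B @ vel) / m

B_true = B_candidates[1]            # pretend the world contains the curl field
weights = np.ones(2) / 2            # prior responsibility of each model
vel = np.array([0.1, 0.3])

for _ in range(20):
    force = np.array([0.0, 2.0])
    vel_next = predict(B_true, vel, force)      # observed consequence
    errs = [np.linalg.norm(vel_next - predict(B, vel, force))
            for B in B_candidates]
    # Soft responsibility update: smaller prediction error -> more weight.
    weights *= np.exp(-np.array(errs) ** 2 / 1e-4)
    weights /= weights.sum()
    vel = vel_next

print(weights)   # mass concentrates on the curl-field model
```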

Monday, October 24, 2016

Commentary on "Can a biologist fix a radio?"

I was recently assigned to write a couple of pages commenting on the famous 2002 paper "Can a biologist fix a radio?", which argues for the importance of systems biology. I think it's highly relevant to modern neuroscience, where we seem to be encountering what Lazebnik (the author of the paper - http://www.cell.com/cancer-cell/abstract/S1535-6108(02)00133-2) calls David's Paradox: every year we accumulate more and better findings, but we don't seem to be getting much closer to a holistic understanding of how the brain works.

This is what I had to say: 


Lazebnik’s 2002 paper leads to some very important conclusions that I believe are ultimately right, but I fundamentally disagree with his arguments from the radio analogy. He seems to believe that the task facing a biologist trying to understand a cell is similar to that facing an engineer trying to understand a radio. I would claim that this is simply false.
The first obvious reason is that cells indeed are much more complex than radios. Lazebnik seems to believe this is an overstatement, claiming that biologists would also find a radio to be vastly more complex than any engineer would judge it to be. I would argue that, in general, biological phenomena pose significantly more complex scientific problems than their physical counterparts. Maybe this only seems to be the case because as of yet we don’t have many universal principles or laws to explain them. But I think this is mainly due to the fact that biology really is that complicated – even the most universal principles we find tend to have dramatic exceptions (e.g. ploidy, epigenetics, extremophiles). Physics, on the other hand, tends to have an easier time postulating principles that hold across all instances of a given phenomenon (e.g. electromagnetism). There’s a reason why we’ve been able to tackle physics questions using mathematical formulae for about two millennia, while mathematical modeling has only much more recently become a central tool in biology. Because life is such an active process, the number of components interacting together leads to systems that are much harder to explain than transistors and capacitors carefully wired together to transform electromagnetic waves into sound waves.
Cell and radio science differ at a much more fundamental level, however. Even if you forget the complexity argument I put forth above, there is a sense in which the problem of understanding a cell fundamentally differs from the problem of understanding a radio: since we built them, we know exactly what radios do. But we don’t actually know what cells are for! We know that they are useful for survival – otherwise they wouldn’t have evolved. But we don’t even know if they are the optimal solution to the problem of survival (since natural selection is a satisficing, rather than optimizing, process), and we can only guess as to how it is that they provide an evolutionary advantage. Certain signal transduction pathways may seem to provide immunological benefits, and others may look like they serve purely metabolic purposes. But fundamentally we can’t actually know what a biological process is for, we can only hypothesize. An engineer approaching a radio, on the other hand, knows exactly what a radio is for. More than that, she also knows what capacitors and transistors are for.
Note how crucial this information is to repairing or understanding an object of study. The engineer can approach the open radio and think about what processes are necessary to transform electromagnetic waves into sound waves. She can make some hypotheses about what components are necessary to implement these processes and then look for them in the machine. She can also interpret the consequences of removing certain components, thus aiding her understanding of what each component does. But a biologist can do none of this. When a biologist removes a component from the system, they may have no idea what has gone wrong. This simple fact enables Lazebnik’s cartoon: it is hard for a biologist to infer anything deeper than whether the machine still works, leading to such simple classifications as most important/really important/undoubtedly most important components. A more nuanced understanding of the deficiencies of the system is impossible without knowing what the system is actually supposed to do.
Before going on to describe how I would approach the radio problem, I pause to note how this is particularly true for brain science. We don’t really have any idea of what the brain’s components are actually for. Sensory neuroscientists seem to pretend to know what sensory systems are for, but we don’t actually even know that. Is the visual system designed to minimize the error between percepts and the real world? (Hoffman, 2009) Is it designed to learn a generative model of the statistics observed in the natural world? (Schwartz et al., 2007) Is it designed to constantly predict what will appear in the visual scene? (Friston, 2009) This in fact stands at odds with one of the central tenets of cognitive science: Marr’s three levels of analysis. David Marr contended that to understand the brain we must start by specifying what it is meant to be doing (the “computational level”). But, unlike a radio or a cash register (Marr’s own analogy; Marr, 1982), we can’t know what the brain does. Presumably, all we really know for sure is that it evolved through natural selection, so it must be useful for survival. But we can’t interrogate how or why natural selection did what it did, only speculate.
It is for this reason that repairing or understanding a cell is inherently a different problem from repairing a radio. If I were to repair a radio, I would take as a starting point its purpose. Then, I would approach the problem of transforming electromagnetic waves into sound waves from first principles, trying to postulate properties that the radio must have to be able to do this. Then I would open it up and see if I can find anything that endows the radio with these properties. If, through my exploratory analyses, I were to find some other principles at play inside, I would try to see how these fit into a solution to the overarching problem of transforming radio waves to sound waves. Note that this approach relies on the fact that I know what this overarching problem is. It allows me to interpret what I find in the radio and to execute my dissection in a guided and principled manner.
Biologists don’t have this luxury. We need to uncover the overarching principles from the ground up. However, this does not mean that our dissection should comprise the core of our investigation. On the contrary, it is crucial that we maintain an active theoretical examination to allow experimental findings to build on each other, so we can eventually reach the universal principles and answer the questions about functional significance (which lead to practical repairs). This is where I agree with Lazebnik. There is no way the experimental findings can build on each other without precisely formulated theories. As he aptly articulates in the article, such theories are only possible with a universal formal language, such as explicit wiring diagrams or mathematics.

REFERENCES

Friston, K. (2009). The free-energy principle: a rough guide to the brain? Trends in Cognitive Sciences, 13(7), 293-301.

Hoffman, D. (2009). The interface theory of perception: Natural selection drives true perception to swift extinction. In S. Dickinson, M. Tarr, A. Leonardis, & B. Schiele (Eds.), Object categorization: Computer and human vision perspectives (pp. 148-165). Cambridge, UK: Cambridge University Press.

Marr, D. (1982). Vision: A computational investigation into the human representation and processing of visual information. San Francisco: W. H. Freeman.

Schwartz, O., Hsu, A., & Dayan, P. (2007). Space and time in visual context. Nature Reviews Neuroscience, 8(7), 522-535.


Saturday, October 15, 2016

First post!

To my many many readers:
I guess I should start by introducing myself and motivating what this blog is supposed to be. I'm a grad student in London, just starting my PhD in computational neuroscience. A computational neuroscientist hopeful, I spend a lot of time thinking about the brain and even more about how to do so. It is not hard to see that we are currently sitting right in the golden age of neuroscience: the past decade has seen a non-stop stream of mind-blowing innovations in tools for looking at and measuring the brain, and multi-billion dollar brain science funding efforts are popping up at the national level. But, along with this insane acceleration in methodology, we seem to have seen, if anything, a deceleration in ideas for how to understand how the brain works.
I stress now, with all honesty, that this is what I care about: how the brain works. My curiosity couldn't care less about brain disease and treatment - these things are (the most) important, of course - but the questions and ideas bubbling in my brain don't tend to touch on any of these things at all. I wonder how this piece of meat could be so powerful. I wonder how its microscopic building blocks interact to produce its macroscopic behaviors. I wonder how computers are so much better than it at putting two and two together, yet absolutely helpless at seeing and learning about the world. This piece of meat can learn an entire language perfectly by just crying, eating, and sleeping for two years. It doesn't even have to try.
I bring up the comparison with computers particularly because computation is at the heart of modern neuroscience. One of the few fundamental ideas that everyone seems to agree on is that, just as a kidney is built for filtering toxic waste out of the blood, the brain is an organ built for computing. One of my most memorable lectures as an undergrad, pivotal in putting me on my current path, was about thinking of the brain as a Turing machine: just like this laptop works on the basis of binary switches, the brain might work on the basis of binary all-or-nothing action potentials. But when you really start unpacking the way computers work and the way the brain works, you quickly begin to see the problem is far bigger. We have no idea how brains compute and hence no idea how far, if at all, the analogy with digital computers goes.
So, since neuroscience seems to be so clueless about how the brain works (except for the fact that it computes - wait, what does that mean?), I have decided to do a great service to society and lend a helping hand. I study computational neuroscience at the Gatsby Computational Neuroscience Unit, working on neural computation with Peter Latham and Adam Kampff. For now, I am working from a theoretical perspective, meaning that I think a lot about how neurons might compute without actually watching them compute. This approach has its benefits and its pitfalls. So what's really important to me these days is trying to figure out exactly what role I want to play in neuroscience, if any. The theory side has really caught my fancy, as they say here, but it is really hard to see how far it can take us.
The idea behind starting this blog is to give myself a way to reflect on these things. I always find that writing things down takes any ideas in my head orders of magnitude further, so hopefully this will do the same. I think one of the best ways to figure out exactly what kind of science you want to do is to try to imagine your ideal scientific result: imagine yourself in that legendary eureka breakthrough moment that leads to a Nobel prize and 10 Nature papers. What does that result look like? What kind of question is it answering? If this blog leads me to a concrete answer to this question then it will have done its job.
One thought I had today: I mentioned above that what my curiosity cares about is how the brain works. But maybe that's not the heart of the matter. The reason understanding how the brain works is so interesting is because of its awesome computational power. The inferences it makes on a millisecond by millisecond basis, so effortlessly and with such messy biological machinery, are otherworldly. We see this especially through mathematics and computer science. Try solving the problem of vision and you will quickly see how impossibly hard it is. Yet we humans do it literally without trying. So maybe the more important question is not how, but why the brain is so good at what it does. The Turing machine has provided us exactly the language to be able to formally think about the problems the brain solves so easily, and we have been able to exploit it to build incredible machines that solve a lot of these problems. But it is striking how much better digital computers are at some things and how much better the brain is at others. Why? Answering this question would answer some of the biggest questions about the brain, and provide the breakthrough AI has been waiting for for so long. Can we answer it by investigating complex interactions between integrate-and-fire neurons? Or are there some more intricate biological principles at play here? Maybe the answer is something akin to deep learning?
As you (I?) can see, mathematics and computer science are at the heart of how I think about the brain. But by putting biology somewhat aside, am I missing out on all the best clues? Maybe the core principles behind the brain's computational power are deep biological properties (molecular? cellular? dendritic? genetic?).
To my now much fewer readers: hopefully not all my posts will be this long.