Exaggerating Significance

I have yet to meet a successful scientist who lacks the ability to exaggerate the importance of what he or she is doing, and I believe that someone who lacks a delusional sense of significance will wilt in the face of repeated experiences of multiple small failures and rare successes, the fate of most researchers.

-Daniel Kahneman, Thinking, Fast and Slow (2011)

I came across this quote on the homepage of a postdoc in Alex Pouget’s lab: https://sites.google.com/site/rubencoencagli/publications.

I do wish things were different, and I wonder whether someone who does “lack the ability to exaggerate the importance of what he or she is doing” can in fact be a successful researcher. I wish more psychology journals would publish technically sound work without judging it on subjectively perceived significance, which is really a matter for future researchers to decide, and which also runs contrary to the very goal of peer review: to remove subjectivity and to ensure that the methods are scientifically sound and that the conclusions follow from them. If more journals explicitly removed significance, as perceived by the editor or the reviewers, as an acceptance criterion, I think we would see a lot more honest scientific writing.

This is also one more reason I want to move toward an even more mathematical area within cognitive science than the one I am currently in (which is already quite quantitative in nature): I would like to let the math speak for itself, so that I don’t have to go to great lengths to justify why the work is significant. For example, a computational account of why higher-order correlations in the activity of a population of neurons matter for the amount of information that can potentially be decoded (e.g., Michel & Jacobs, 2006) is fairly self-explanatory when it comes to a question about significance. This self-explanatory quality is not just a result of how quantitative the work is; it is also a function of the type of question asked. As long as there is a clear question and a computationally sound answer to it, a statement about significance should be almost identical to the conclusion (e.g., we conclude that higher-order correlations in the activity of a population of model neurons contain significant information about binocular disparity).

So, to conclude, I think one can do impactful work if one 1) chooses important questions to answer (importance is of course subjective, but I would rather have informed subjectivity play a role here than during peer review), 2) uses technically sound methods to answer them, and 3) lets the work speak for itself without exaggerating its significance.

References

Michel, M. M., & Jacobs, R. A. (2006). The costs of ignoring high-order correlations in populations of model neurons. Neural Computation, 18(3), 660-682.

Neurons and conductance-based models

This is a summary of Ch. 2 of Trappenberg. Some topics covered: 1) biological details of neurons, 2) synaptic mechanisms and dendritic processing, 3) generation of action potentials: the Hodgkin-Huxley equations, and 4) compartment models, which can take neuronal morphology into account.

Biological details

The two main cell types in the nervous system are neurons and glial cells. We seldom hear much about models of glial cells because, until recently, they were thought to have only supporting functions; however, neuron-glia interactions and glial network computations are currently being explored. Let us now turn to neurons. I will assume that the reader has basic familiarity with the different parts of the neuron (soma, dendrites, axon, etc.). It is worth focusing on the kinds of neurons present in particular brain areas, as these have important computational consequences. Pyramidal neurons (pyramid-shaped soma, 75-90% of the neocortex!) and stellate neurons (star-shaped) are the most common types of neocortical neurons. Stellate neurons can be spiny or smooth. Spiny stellate neurons and pyramidal cells both have spines on their dendrites. Pyramidal cells form asymmetrical-looking synapses, and both pyramidal and spiny stellate neurons are thought to be excitatory. Smooth stellate neurons form inhibitory connections via symmetrical-looking synapses. Neurons receive input from many other neurons, typically on the order of 10,000; some pyramidal cells in the hippocampus receive around 50,000 or more inputs.

Chemical details

The inside of the neuron has more K+ than the outside, and the outside has more Na+ than the inside. Now consider just the K+ ion channel (which is permeable to K+ under certain conditions). The concentration difference leads to diffusion of K+ from the inside to the outside of the cell. This diffusion leaves an excess negative charge inside and an excess positive charge outside the cell, since the anions that typically accompany K+ cannot pass through the same channel. However, the electrical force generated by the potential difference ultimately balances the diffusive force generated by the concentration difference, and the cell settles into an equilibrium state. The equilibrium potential (calculated from the Nernst equation) for the potassium channel is around -80mV; this is called the reversal potential. Another major ion that contributes to the membrane potential of the neuron is Na+, whose concentration is greater outside the cell than inside to start with. Doing the same calculation for Na+ and combining results across multiple channels of different ions leads to the resting potential of the neuron, which is typically around -65mV. Ion channels are basically proteins embedded in the cell membrane that form pores of particular shapes, making them permeable to certain ions but not others. The major ions that operate in signal transmission within and between cells are Na+, K+, Ca2+ and Cl-. The ion channels that contribute to the resting potential (primarily K+ and Na+) are typically open all the time and are called leakage channels.
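
As a quick sanity check on these numbers (my own illustration, not from the book), the Nernst equation can be evaluated directly. The ionic concentrations below are typical mammalian values that I have assumed, so the results only roughly match the -80mV quoted here and the sodium value quoted later:

```python
# Nernst equation: E = (R*T)/(z*F) * ln([ion]_out / [ion]_in)
# Concentrations are assumed typical mammalian values (mM), not Trappenberg's.
import math

R, F, T = 8.314, 96485.0, 310.0   # gas constant J/(mol*K), Faraday constant C/mol, body temperature K

def nernst_mv(z, c_out, c_in):
    """Equilibrium (reversal) potential in mV for an ion of valence z."""
    return 1000.0 * (R * T) / (z * F) * math.log(c_out / c_in)

print(f"E_K  = {nernst_mv(+1, c_out=5.0,   c_in=140.0):6.1f} mV")   # roughly -80 to -90 mV
print(f"E_Na = {nernst_mv(+1, c_out=145.0, c_in=12.0):6.1f} mV")    # roughly +60 to +70 mV
```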

Communication between neurons typically takes place via chemical synapses. However, neurons also communicate in other ways; one example is electrical synapses, or gap junctions, which contain specialized conducting proteins that allow direct electrical signal transfer.

Synaptic mechanisms and dendritic processing

The arrival of an action potential at the axon terminal triggers a probabilistic release of neurotransmitters into the synaptic cleft. Common neurotransmitters are glutamate (Glu), gamma-aminobutyric acid (GABA), dopamine (DA, implicated in motivation, attention, and learning) and acetylcholine (ACh, important for initiating muscle movements and found at neuromuscular junctions). These neurotransmitters bind to specific receptor sites on the postsynaptic dendrites and can open or close neurotransmitter-gated ion channels. A larger amount of neurotransmitter released from the presynaptic neuron typically produces a stronger response in the postsynaptic neuron, but this relationship need not be linear. Also, the same neurotransmitter can have different effects on the postsynaptic neuron depending on the receptor type it binds to. Receptors that are tightly coupled to an ion channel are called ionotropic, and these channels typically open rapidly after binding neurotransmitter. Metabotropic receptors, on the other hand, can only influence ion channels via second messengers and are therefore slower and less specific.

Neurotransmitters (and/or the associated synapses) can have excitatory or inhibitory effects on the postsynaptic neuron. Excitatory neurotransmitters/synapses facilitate the entry of positively charged ions into the postsynaptic cell, thereby increasing its membrane potential. Synaptic channels gated by Glu are common examples of excitatory synapses. The dynamics of the excitatory process are influenced by the types of receptors present at the synapse. For example, AMPAR is an ionotropic Glu receptor that can be activated quickly by alpha-amino-3-hydroxy-5-methylisoxazole-4-propionic acid. NMDAR is another type of Glu receptor, but it is also voltage gated and its action is slower: the associated channels are blocked by Mg2+ ions in the neuron’s resting state, and it takes a depolarization to kick the Mg2+ out of place and allow the influx of Na+ and Ca2+, the two main ions involved in excitation. An example of an inhibitory neurotransmitter is GABA, with its associated receptors: GABA_A is a fast receptor and GABA_B is a slow one. DA has both excitatory and inhibitory associated receptors.

A computational note about inhibition: inhibition can be subtractive or divisive. Subtractive inhibition occurs when the postsynaptic membrane potential is lowered overall. Divisive inhibition occurs when inhibitory synapses have a modulatory (i.e., multiplicative) effect on excitation. GABA_A receptors, for example, have no effect on the membrane potential if the neuron is at rest. Now imagine the neuron receives an excitatory influx of cations: the GABA_A receptors act to reduce the excitatory effect, but cease to have any effect once the neuron returns to rest. This modulation of summed excitatory postsynaptic potentials (EPSPs) is also called shunting inhibition.
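
A toy illustration of the difference (my own sketch, not from Trappenberg): for a single passive compartment with leak, excitatory, and inhibitory conductances, the steady-state potential is a conductance-weighted average of the reversal potentials. An inhibitory reversal potential equal to the resting potential gives shunting (divisive) inhibition, which does nothing at rest but scales down the response to excitation; an inhibitory reversal potential below rest gives a roughly subtractive effect:

```python
# Steady-state membrane potential of a one-compartment neuron with three conductances.
# Conductances are in arbitrary units, potentials in mV; all values are illustrative.
def v_steady(g_l, e_l, g_e, e_e, g_i, e_i):
    """Conductance-weighted average of the reversal potentials (steady state of the membrane equation)."""
    return (g_l * e_l + g_e * e_e + g_i * e_i) / (g_l + g_e + g_i)

e_l, e_e = -65.0, 0.0   # resting (leak) and excitatory reversal potentials
cases = [
    (0.0, -65.0, "no inhibition"),
    (1.0, -65.0, "shunting inhibition (E_i = E_rest, GABA_A-like)"),
    (1.0, -90.0, "hyperpolarizing inhibition (E_i below rest)"),
]
for g_i, e_i, label in cases:
    at_rest  = v_steady(1.0, e_l, 0.0, e_e, g_i, e_i) - e_l   # depolarization with no excitatory input
    with_exc = v_steady(1.0, e_l, 0.5, e_e, g_i, e_i) - e_l   # depolarization with an excitatory conductance
    print(f"{label:48s} rest: {at_rest:+6.1f} mV   excited: {with_exc:+6.1f} mV")
```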

Modeling Synaptic Responses

The time course of a postsynaptic potential (PSP) can be measured experimentally. The time course for fast excitatory AMPA and inhibitory GABA receptors can be described by an alpha function (Eq. 2.1). The alpha function is merely a description of how the PSP changes in time after a presynaptic event. To derive it from a mechanism, we need a model of the dendrite and of the synapse. For this, we treat a segment of the dendrite as a single compartment with a certain capacitance and a constant resistance (i.e., a leaky capacitor). We then add a neurotransmitter-gated resistor supplied with a battery that represents the potential difference (caused by the concentration difference of the ions) between the inside and the outside of the cell. Conservation of electric charge, as described by Kirchhoff’s law, then gives the relationship between the capacitance of the dendritic compartment, the current flowing into the cell, and the rate of change of the membrane potential over time. This linear differential equation can be solved to obtain the time course of the membrane potential (which should resemble the alpha function).
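
For reference, a common textbook form of the alpha function is sketched below; the normalization and time constant are my own choices and may not match Eq. 2.1 exactly:

```python
# Alpha-function time course of a PSP: rises, peaks at t = tau, then decays.
import numpy as np

def alpha_psp(t, amplitude=1.0, tau=2.0):
    """PSP amplitude at time t (ms) after the presynaptic event; normalized so the peak equals `amplitude`."""
    t = np.asarray(t, dtype=float)
    return np.where(t > 0.0, amplitude * (t / tau) * np.exp(1.0 - t / tau), 0.0)

t = np.arange(0.0, 20.0, 0.1)
psp = alpha_psp(t)
print(f"peak = {psp.max():.2f} at t = {t[psp.argmax()]:.1f} ms")   # peak = 1.00 at t = 2.0 ms
```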

We can now incorporate more details into this model. The current, for example, can be assumed to be produced by two channels: a leakage channel (see earlier) with a certain conductance and a zero reversal potential (since we measure all other potentials relative to this), and a neurotransmitter-gated ion channel with a time-varying conductance and its own reversal potential. So we now also need to know the time-varying channel conductance in order to solve the differential equation. We can model it with a model of average channel dynamics. The assumption behind this simple model is that many channels open as soon as they bind neurotransmitter, but then close stochastically, much like radioactively decaying material. The average time course of the conductance then looks like an exponential decay, except at t = t_delay after the presynaptic spike (a free parameter), at which time there is an additional contribution to the conductance. I assume the role of t_delay is to model other factors (perhaps interactions between ion channels and other dendritic mechanisms) that might increase the average conductance at a certain delay after the first presynaptic spike. t_delay is set to 0.01 in the simulations that produced Fig. 2.5B, so those contributions are neglected there.

Now the opening of the neurotransmitter-gated channel gives rise to a synaptic current. The current depends on the conductance of the channel as well as on the membrane potential relative to the reversal potential of the neurotransmitter-gated channel. If the channel remained open, an equilibrium would be reached where the synaptic current through the neurotransmitter-gated channel matches the leakage current. However, neurotransmitter-gated channels close rapidly, and so the compartment ultimately returns to its resting state.
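
Putting the pieces of this section together, here is a rough numerical sketch of the single-compartment model just described: a leaky capacitor driven by a synaptic conductance that switches on after a delay and then decays exponentially. All parameter values are illustrative rather than Trappenberg's, and potentials are measured relative to rest:

```python
# Single dendritic compartment: C dV/dt = -g_l*(V - E_l) - g_syn(t)*(V - E_syn),
# integrated with a simple forward-Euler scheme.
import numpy as np

c_m, g_l, e_l = 1.0, 0.1, 0.0            # capacitance, leak conductance, leak reversal (rest = 0 mV)
g_max, tau_syn, e_syn = 0.05, 5.0, 70.0  # peak synaptic conductance, decay constant (ms), synaptic reversal
t_spike, t_delay = 5.0, 0.01             # presynaptic spike time and synaptic delay (ms)

dt, t_max = 0.1, 60.0
times = np.arange(0.0, t_max, dt)
v = np.zeros_like(times)

for i in range(1, len(times)):
    t = times[i - 1]
    # Synaptic conductance jumps on at t_spike + t_delay, then decays exponentially.
    if t >= t_spike + t_delay:
        g_syn = g_max * np.exp(-(t - t_spike - t_delay) / tau_syn)
    else:
        g_syn = 0.0
    i_leak = -g_l * (v[i - 1] - e_l)
    i_syn = -g_syn * (v[i - 1] - e_syn)
    v[i] = v[i - 1] + dt * (i_leak + i_syn) / c_m   # Kirchhoff's current law for the compartment

print(f"peak PSP = {v.max():.2f} mV at t = {times[v.argmax()]:.1f} ms")   # alpha-like rise and decay
```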

Generating Action Potentials: Hodgkin-Huxley equations

In the previous section, we explored a simple model of how the release of neurotransmitters after a presynaptic event can change the membrane potential of a postsynaptic dendrite. We now need to model how the change in membrane potential can ultimately lead to an action potential in the postsynaptic neuron. Alan Hodgkin and Andrew Huxley measured the action potential generated in the giant axon of the squid. The form of that action potential is shown in Fig. 2.6: the membrane potential first increases (depolarization), then decreases sharply, undershooting the resting potential (hyperpolarization), and finally returns to the resting potential. Hodgkin and Huxley did not stop there: they came up with equations that described the form of the action potential, and the mathematical terms in their equations were later identified with specific ion channels.

To generate an action potential of this form, at least two types of voltage-dependent ion channels and one type of static ion channel are necessary. We talked about neurotransmitter-gated ion channels in the previous section; voltage-gated channels, in contrast, open and close depending on the membrane potential. When neurotransmitters bind and open neurotransmitter-gated ion channels and the membrane is sufficiently depolarized, voltage-dependent Na+ channels open (immediately after the membrane potential crosses the required threshold) and Na+ ions rush into the cell, producing the initial rising phase of the action potential. The membrane potential is driven close to the sodium reversal potential, which is around +65mV. The falling phase of the action potential is caused by two processes. First, the sodium channels are blocked by a protein that is part of the channel, around 1ms after the channel opens. At around the same time, voltage-gated potassium channels open; these channels are slow to open, unlike the sodium channels, which opened immediately after the threshold was crossed, and they take about 1ms to do so. By that time the sodium channels have closed. So when the sodium channels close and the potassium channels open, there is an efflux of K+, and the membrane potential drops. Since the potassium channels now dominate, the membrane potential settles near the potassium reversal potential of around -80mV, which is lower than the resting potential (i.e., the potential undershoots the resting value: hyperpolarization). Hyperpolarization causes the potassium channels to close, and the cell eventually returns to its resting potential.

Finally, neurons very often need to generate action potentials repeatedly. If that is the case, the repeated influx of Na+ and efflux of K+ would eventually decrease the K+ concentration and increase the Na+ concentration inside the cell so much that further influx of Na+ and efflux of K+ would no longer be possible. This is why neurons have ion pumps: ion channels that can pump ions against their concentration gradients. This requires considerable energy; ion pumps account for around 70% of the total energy consumption of neurons and about 15% of the total energy consumed by a human being!

Hodgkin-Huxley Equations:

The Hodgkin-Huxley model of spike generation is a set of four coupled differential equations. I_ion is the current carried by a given ion species across the membrane, and g’_ion is the conductance of the corresponding ion channels (determined by the number and permeability of the open channels); the prime denotes that this conductance depends on time and voltage. Ohm’s law gives the relation between the current, the conductance, and the membrane potential V relative to its resting value (Eq. 2.7). Three variables, n, m and h, capture respectively the activation of the K+ channels, the activation of the Na+ channels, and the inactivation of the Na+ channels. The voltage and time dependence of the conductances g’_Na and g’_K can then be written as a modulation of constant conductances g_ion by the time- and voltage-dependent variables n(V,t), m(V,t) and h(V,t) (Eqs. 2.8 and 2.9). The specific forms of Eqs. 2.8 and 2.9, and the dynamics of these channels, were chosen so that the measured action potential of the squid giant axon could be described well. The dynamics are modeled by a set of three first-order differential equations, one for each gating variable. Finally, since the membrane stores electrical charge, this is represented by a capacitance in the circuit (Fig. 2.8). The entire spike-generating mechanism can therefore be represented by a capacitance in parallel with three resistors (the ion channels), each with its own battery (the corresponding reversal potential). Two of the resistances (the Na and K channels) vary with the state of the system, while one is constant (the leakage channel). Kirchhoff’s law gives a fourth differential equation, and the four equations are coupled via the dynamic variables n, m and h.
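
For concreteness, here is the standard textbook form of these four coupled equations, written with absolute membrane potentials; Trappenberg’s Eqs. 2.7-2.9 (and the accompanying gating equations) may use slightly different conventions, such as potentials measured relative to rest, so treat this only as a reference:

```latex
C_m \frac{dV}{dt} = -\bar{g}_{\mathrm{Na}}\, m^3 h \,(V - E_{\mathrm{Na}})
                    - \bar{g}_{\mathrm{K}}\, n^4 \,(V - E_{\mathrm{K}})
                    - g_L \,(V - E_L) + I_{\mathrm{ext}}(t),
\qquad
\frac{dx}{dt} = \alpha_x(V)\,(1 - x) - \beta_x(V)\,x, \quad x \in \{m, h, n\}
```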

Using the Hodgkin-Huxley model, we can now measure the frequency of spikes as a function of the input current. This is called the activation function. Simulations reveal three properties of a Hodgkin-Huxley neuron (a rough simulation sketch follows this list):

  1. The neuron fires only when the input current crosses a certain threshold.
  2. The neuron fires within a narrow range of frequencies (starting at around 53Hz); the frequency does not increase much even when the input current is increased further.
  3. Finally, it can be shown that high-frequency noise in the input current tends to linearize the activation function (see Fig. 2.11) and produce a more graded response than the threshold behavior described in 1.
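
Below is a rough, self-contained simulation sketch of these properties (my own code, using the standard squid-axon parameters and rate functions rather than anything taken from the book, so the exact numbers will not match Trappenberg's figures; the rate functions are also not guarded against their removable singularities):

```python
# Hodgkin-Huxley point neuron and its activation (f-I) curve, forward-Euler integration.
import numpy as np

g_na, g_k, g_l = 120.0, 36.0, 0.3     # maximal conductances (mS/cm^2)
e_na, e_k, e_l = 50.0, -77.0, -54.4   # reversal potentials (mV)
c_m = 1.0                             # membrane capacitance (uF/cm^2)

def rates(v):
    """Voltage-dependent opening/closing rates for the gating variables m, h, n."""
    a_m = 0.1 * (v + 40.0) / (1.0 - np.exp(-(v + 40.0) / 10.0))
    b_m = 4.0 * np.exp(-(v + 65.0) / 18.0)
    a_h = 0.07 * np.exp(-(v + 65.0) / 20.0)
    b_h = 1.0 / (1.0 + np.exp(-(v + 35.0) / 10.0))
    a_n = 0.01 * (v + 55.0) / (1.0 - np.exp(-(v + 55.0) / 10.0))
    b_n = 0.125 * np.exp(-(v + 65.0) / 80.0)
    return a_m, b_m, a_h, b_h, a_n, b_n

def firing_rate(i_ext, t_max=500.0, dt=0.01):
    """Simulate t_max ms of constant current injection (uA/cm^2) and return the spike rate in Hz."""
    v, m, h, n = -65.0, 0.05, 0.6, 0.32   # approximate resting state
    spikes, above = 0, False
    for _ in range(int(t_max / dt)):
        a_m, b_m, a_h, b_h, a_n, b_n = rates(v)
        m += dt * (a_m * (1.0 - m) - b_m * m)
        h += dt * (a_h * (1.0 - h) - b_h * h)
        n += dt * (a_n * (1.0 - n) - b_n * n)
        i_na = g_na * m**3 * h * (v - e_na)
        i_k = g_k * n**4 * (v - e_k)
        i_l = g_l * (v - e_l)
        v += dt * (i_ext - i_na - i_k - i_l) / c_m   # membrane equation (Kirchhoff's current law)
        if v > 0.0 and not above:                    # count upward crossings of 0 mV as spikes
            spikes, above = spikes + 1, True
        elif v < -20.0:
            above = False
    return spikes * 1000.0 / t_max

# Sweep the input current: note the threshold and the narrow range of firing rates.
for i_ext in [2.0, 6.0, 10.0, 20.0, 40.0]:
    print(f"I = {i_ext:5.1f} uA/cm^2  ->  {firing_rate(i_ext):6.1f} Hz")
```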

Why can’t the neuron fire above a certain rate? The inactivation of the sodium channels toward the end of an action potential makes it impossible to generate the next action potential for about 1ms (the channels inactivate roughly 1ms after they first open to produce the rising phase). This period is called the absolute refractory period and limits the firing rate to a maximum of about 1000 Hz (1 spike/ms). It is also difficult to generate the next action potential during the hyperpolarizing phase of the current one; this is called the relative refractory period, and it further reduces the maximum possible firing rate. Brainstem neurons can fire at high frequencies, sometimes more than 600Hz, but cortical neurons fire at much lower rates, sometimes 1-2Hz. Refractory periods alone cannot explain these very low firing rates; it is important to consider factors like interactions within the network (e.g. inhibitory interneurons) to explain this and other details of neuronal firing patterns.

The generated action potential is propagated down the axon via saltatory conduction, which enables faster propagation of the signal. The myelin sheath covering the axon makes it impossible for ions to cross the membrane along the stretches covered by the sheath. However, there are breaks in the sheath called the nodes of Ranvier. Once an action potential is generated at the axon hillock, the conducting fluid inside the axon elevates the potential at the neighboring node of Ranvier, which triggers the generation of an action potential at that node, and so on down the length of the axon. So instead of having to generate action potentials all along the axon, the neuron only regenerates the action potential at the nodes of Ranvier, which makes for fast propagation. Different sources say different things about whether or not this propagation is lossy. My feeling (which is also what Trappenberg says in his book) is that it is lossy: one reason could be that the channel makeup and dynamics may differ across the nodes of Ranvier, so the exact form of the action potential might not be regenerated at each node (just my intuition; I need to verify this). Finally, this method of propagation also ensures that the action potential travels in one direction. The previous node of Ranvier is inactive because its sodium channels remain inactivated for around 1ms, so the action potential tends to proceed in the forward direction. However, there is evidence that action potentials can backpropagate into the dendrites, and the change in potential brought about by these backpropagating signals influences Ca2+ concentrations, which has implications for models of plasticity such as spike-timing-dependent plasticity (STDP).

Extension of the Hodgkin-Huxley equations to include more details: The Wilson Model:

The time constants in the equations for the Na+ and K+ dynamic variables in the H-H model show a reciprocal pattern, so the dimensionality of the model can be reduced by setting h = 1 - n. Simulations also show that the dynamics of m are extremely fast, so that variable can be replaced with its equilibrium value. Neocortical neurons, moreover, do not show inactivation of the fast Na+ channels, so h can be set to 1. A system of two differential equations can thus be shown to behave very much like the full H-H system.
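
Written in the same notation as the Hodgkin-Huxley block above, the kind of two-variable system this reduction yields looks roughly as follows (this is my own sketch of the reduction; the Wilson model itself uses different, fitted functional forms for the conductances):

```latex
C_m \frac{dV}{dt} = -\bar{g}_{\mathrm{Na}}\, m_\infty(V)^3\, h \,(V - E_{\mathrm{Na}})
                    - \bar{g}_{\mathrm{K}}\, n^4 \,(V - E_{\mathrm{K}})
                    - g_L \,(V - E_L) + I_{\mathrm{ext}}(t),
\qquad
\tau_n(V)\,\frac{dn}{dt} = n_\infty(V) - n
```

with the fast variable m replaced by its equilibrium value, and h replaced either by 1 - n (squid axon) or by 1 (neocortical neurons, which lack fast Na+ inactivation).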

We can now add in more channels to this system that we know play an important role in mammalian nervous systems. Two types of channels are especially relevant: 1) Ca2+ ion channels with more graded response characteristics (required to produce the graded responses of neocortical neurons), and 2) a channel (like a Ca2+ mediated K+ channel) that produces a slow hyperpolarizing current  (necessary to generate more complex firing patterns like those seen in human brains).

The Wilson model can reproduce many aspects of mammalian spike generation, some of which are:

  1. Regular spiking neurons (RS)
  2. Fast spiking neurons (FS) – typically found in inhibitory interneurons in the neocortex when stimulated by a constant external current.
  3. Continuously spiking neurons (CS)
  4. Intrinsic bursting neurons (IB)

Firing rates can decline after the initial stimulation of a neuron; this is called spike rate adaptation or fatigue. The Wilson model captures this through the equation for the slow hyperpolarizing channel. See Fig. 2.13 for other complex patterns that are well described (and reproduced) by the Wilson model.

Putting the pieces together – Including neuronal morphology (compartment models):

In addition to taking into account conductance properties of ion channels, we must also consider shapes of neurons to fully understand how they work. We use compartment models to achieve this goal. The neuron and its many dendrites are broken up into hundreds or thousands of small compartments, each governed by a 1st order differential equation.

To get to the 1st order differential equations, we start with the cable equation (Eq. 2.26), which describes the spatio-temporal variation of a potential along a cable-like conductor driven by an injected current. The dimensions and physical properties of the cable are taken into account within the linear cable equation. Hodgkin-Huxley-style voltage-gated ion channels can also be incorporated into the model, yielding nonlinear cable equations, which can be solved numerically. Finally, by considering small enough compartments (such that the potential inside a compartment can be assumed constant), the spatial derivatives in the cable equation reduce to difference equations, and a 1st order differential equation results for each compartment. The final compartment model of the neuron is thus described by a set of 1st order differential equations.
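
For reference, the standard form of the passive (linear) cable equation and its discretization into compartments are sketched below; Trappenberg's Eq. 2.26 may differ in notation and in how the injected current is scaled:

```latex
\tau_m \frac{\partial V}{\partial t} = \lambda^2 \frac{\partial^2 V}{\partial x^2} - V + r_m\, i_{\mathrm{inj}}(x, t)
```

Here tau_m is the membrane time constant, lambda the electrotonic length constant, and r_m the membrane resistance; V is measured relative to rest. Replacing the second spatial derivative with differences between neighboring compartments j-1, j, j+1 yields one 1st order equation per compartment, of the form

```latex
C_j \frac{dV_j}{dt} = -\sum_{\mathrm{ion}} I_{\mathrm{ion},j}
  + \frac{V_{j-1} - V_j}{R_a} + \frac{V_{j+1} - V_j}{R_a} + I_{\mathrm{inj},j}
```

where R_a is the axial resistance between adjacent compartments and the ionic currents can include a passive leak as well as Hodgkin-Huxley-style voltage-gated currents.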

That concludes this post. We saw how neuronal spike generation can be modeled in great detail. In the next couple of posts, we will look at abstractions of neurons so that we can eventually incorporate simplified neuron models into network models.

 

EDIT: Please read the edit to the introductory post on this blog. I have now decided that a better use of my time is to simply summarize the results of interesting articles and book chapters, instead of summarizing entire articles and chapters. If my posts then spike your interest, please read the original articles for more information. I would like to focus this blog more on my thoughts about and evaluation of articles and books. I summarize Ch. 2 of Trappenberg in this post. The rest of the book is fabulous as well; in fact, I enjoyed it so much that I will seriously consider doing a postdoc in a computational neuroscience lab that uses such spiking neuron models to explain memory phenomena. I have a couple of labs in mind, and I will talk about some brilliant articles that have come out of those labs in future posts.


Preliminary Considerations

I took three courses on neural networks in grad school. However, since we usually worked with simplified neurons such as the McCulloch-Pitts neuron when talking about neural networks in those courses, I have always wanted to learn about detailed models of the neuron. Only if you have a sense of all the details that go into a full-fledged model of the neuron can you tell whether your simplifying assumptions are justified when putting these neurons into a network that is supposed to perform a certain function. The first part of Trappenberg does exactly that, though there are other books that go into even more detail. For now, I am happy with what I got out of Trappenberg. Before I get into the details of individual neurons and populations of neurons, however, let me talk a little about the considerations that guide neuro-computational research.

One might wonder: if we had sufficient information about single neurons, why couldn’t we just simulate the whole brain? Why is this considered one of the biggest challenges of the millennium? Even if we overcame the computational hurdles and were able to simulate more than 100 billion neurons, each with 5,000-10,000 synapses onto other neurons, students of the brain must keep in mind that this is not a race toward scaling simulations up to the size of the brain. We cannot build (simulate) a brain unless we know what kinds of computations it can and needs to do. Imagine wanting to build a radio by putting together transistors (assuming you know all there is to know about transistors) without actually knowing what a radio is supposed to do: what signals it needs to work with, what frequency ranges to aim for, what kind of circuitry can achieve the desired functions, and so on. So it is important to ask simpler questions about the computational capabilities of the brain using scaled-down models. This is also where you decide on the level of abstraction to use. For example, if your goal is to understand what triggers epilepsy, asking questions about systems-level brain dynamics is probably going to be more fruitful than carefully modeling individual neurons, since we know that seizures are caused by the synchronization of entire areas of the brain. Insights from other levels can often be useful as well, but again, the nature of your question should guide the level of analysis you focus your research on.

As a guiding principle, it is a good idea to use experimental data to understand general principles of brain processing, and then use that knowledge to build systems that can mimic brain functions, in order to test the various structural and computational assumptions made by your theory. Some general properties of neural systems that we do know about are: parallel distributed processing (the use of many parallel, though not independent, processors), interaction among neurons (as parts of large connected networks), emergence (neural systems develop properties that were not encoded directly into them; these emergent properties result from the rules we do assume to begin with, but we cannot be satisfied with specifying the rules alone, because even a small set of rules can give rise to a wide variety of emergent properties), and adaptation/learning (brains can learn new things and adjust their responses to incoming stimuli from the environment).

Marr’s levels of analysis: David Marr suggested that different considerations motivate studies at different levels of analysis. The important distinct levels and their corresponding goals according to Marr are:

1) Computational theory: the goal of the computation, its appropriateness, and the logic behind it. Marr argued that it is important to consider the nature of the problem to be solved rather than delving straight into hardware and implementation issues, since insights into computational properties can be gained by carefully considering the problem itself.

2) Representation and algorithm: what the representations of the input and output are, and what algorithm carries out the necessary transformation.

3) Hardware implementation: How can the representation and algorithm be physically realized? In our line of work, we consider biological plausibility a big deal. However, engineers who work with neural networks with purely computational goals don’t care about biological plausibility as long as the computational goal is achieved by the network.

All that said, big simulations of large numbers of neurons can be exciting, especially if they achieve great computational feats. Eliasmith et al. (2012) simulated 2.5 million neurons in what is considered the state of the art in implementing an architecture based on spiking neurons. The model, the Semantic Pointer Architecture Unified Network (SPAUN), can perform multiple tasks without the architecture having to be modified for the different tasks. Though significant scaling has been achieved in the past by constructing the brain in a bottom-up fashion (e.g. a simulation of 100 billion neurons (Izhikevich & Edelman, 2008)), SPAUN is the first architecture to demonstrate the utility of top-down influences on large-scale simulations of the brain by establishing a link between a scaled-up simulation and observable behavior. There are limitations to the architecture (e.g. it cannot learn a new task, whereas the brain can easily do so), but its success lies in the functionality of the architecture and its link to directly observable behavior. Now, what exactly is a spiking neuron? This and related topics will be the focus of the next couple of posts.

Finally (some more material from Ch. 1 of Trappenberg), why exactly do we need a brain? Though the question might seem silly at first glance, it is worth considering. Plants do just fine without brains. The sea squirt, a sea creature, starts out with a small nervous system; it swims around, eventually attaches itself to a rock, and then digests its own brain! So it seems that one of the primary goals of creatures with brains is to move around, likely to find food and mates, in order to maximize the probability of survival. Sensorimotor control is therefore a very important function of the brain. Simple feedforward neural networks can achieve the computational goal of taking in sensory input and converting it to motor signals, no matter how complex the mapping is (since feedforward networks are universal function approximators). However, the brain does much more than respond to environmental input, and therefore needs more than feedforward networks to achieve its computational goals. It requires sophisticated feedback systems to carry out extremely fast computations that even computers many orders of magnitude faster than a neuron cannot match. The brain is thought to actively interact with the environment by anticipating what the environment is going to throw at it at every instant. Consider, for example, visual scene analysis. Simon Thorpe showed that humans can categorize visual stimuli after an exposure lasting just a few tens of milliseconds. As a rough guideline, neural processing takes around 10ms per synaptic stage. That leaves a feedforward system with the daunting task of completing the computation within a few hundred steps, but the algorithms we currently know of that work with feedforward systems need thousands of steps even to achieve a piece of the computation. This is one of many reasons why the brain is thought to be an anticipatory memory system with sophisticated feedback loops. Anticipation, or prediction, by the brain is one of the key areas of research that I will be involved with over the next couple of years.

On to neurons and conductance-based models of neurons.

References

Trappenberg, T. (2010). Fundamentals of computational neuroscience (2nd ed.). Oxford: Oxford University Press.

Eliasmith, C., Stewart, T. C., Choo, X., Bekolay, T., DeWolf, T., Tang, C., & Rasmussen, D. (2012). A large-scale model of the functioning brain. Science, 338, 1202-1205.

Izhikevich, E. M., & Edelman, G. M. (2008). Large-scale model of mammalian thalamocortical systems. PNAS, 105, 3593-3598.

Introduction

I am currently in a Cognitive Psychology Ph.D. program. I have been reading some very interesting books and papers for my candidacy exam, and the plan is to summarize some of the more interesting ones here and to write about my thoughts on the issues they discuss. Since I came in with a degree in Physics, I knew very little about Psychology and Neuroscience. Over the past 4 years, I have developed a keen interest in neural network modeling, Bayesian methods, cognitive psychology, and neuroscience in general. To further my knowledge of computational neuroscience, I have been reading “Fundamentals of Computational Neuroscience” by Thomas Trappenberg, which was recommended very highly to me by one of my committee members, an excellent cognitive scientist himself. I am fortunate to also have leading Bayesian methodology researchers on my committee, so I look forward to reading cutting-edge papers in that area. Finally, my own adviser is a brilliant young cognitive neuroscientist. I am particularly interested in how context is represented in the brain, which also happens to be my adviser’s area of expertise.

I hope the readers of this blog find my writings useful in their own quest for knowledge about the brain.

EDIT: I have now successfully completed my candidacy exams. Rather than summarize chapters and articles in their entirety here, as I did for a couple of posts, I will henceforth summarize the results of interesting articles and then write about my thoughts on them. I feel that is a better use of my time and yours, because if what you read here does spike your interest, you can always get the full story by reading the original article or book (which you should if you need more information).