This is a summary of Ch 2, Trappenberg. Some topics covered: 1) Biological details of neurons 2) Synaptic mechanisms and dendritic processing 3) Generation of action potentials: Hodgkin-Huxley equations and 4) Compartment models which can take into account neuronal morphology.
The two main cell types in the nervous system are neurons and glial cells. We seldom hear much about glial cell models because, until recently, glia were thought to have only supporting functions. However, neuron-glial interactions and glial network computations are currently being explored. We will now talk about neurons. I will assume that the reader has basic familiarity with the different parts of the neuron (soma, dendrites, axon, etc.). Let us focus on the kinds of neurons present in particular brain areas, as these have important computational consequences. Pyramidal neurons (pyramid-shaped soma, 75-90% of the neocortex!) and stellate neurons (star-shaped) are the most common types of neocortical neurons. Stellate neurons can be spiny or smooth. Spiny stellate neurons and pyramidal cells both have spines on their dendrites. Pyramidal cells form asymmetrical-looking synapses, and both pyramidal and spiny stellate neurons are thought to be excitatory. Smooth stellate neurons form inhibitory connections via symmetrical-looking synapses. Neurons receive input from many other neurons, typically on the order of 10,000; some pyramidal cells in the hippocampus receive around 50,000 or more inputs.
The inside of the neuron has more K+ than the outside, and the outside has more Na+ than the inside. Now consider just the K+ ion channel (which is permeable to K+ under certain conditions). The concentration difference drives diffusion of K+ from the inside to the outside of the cell. This diffusion leads to an excess negative charge inside and an excess positive charge outside the cell, since the anions that typically accompany K+ cannot pass through the same channel. However, the electrical force generated by the potential difference ultimately balances the diffusive force generated by the concentration difference, and the cell settles into an equilibrium state. The equilibrium potential (calculated from the Nernst equation) for the potassium channel is around -80mV. This is called the reversal potential. Another major ion that contributes to the membrane potential of the neuron is Na+, whose concentration is greater outside the cell than inside to start with. Doing the same calculation for Na+ and combining results across multiple channels of different ions leads to the resting potential of the neuron, which is typically around -65mV. Ion channels are proteins embedded in the cell membrane that form pores whose shapes make them permeable to certain ions but not others. The major ions involved in signal transmission within and between cells are Na+, K+, Ca2+ and Cl-. The ion channels that contribute to the resting potential (primarily K+ and Na+) are typically open all the time and are called leakage channels.
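The Nernst calculation mentioned above is easy to carry out directly. A minimal sketch in Python, using illustrative mammalian ion concentrations that are my own assumption (not values from the chapter):

```python
import math

def nernst(z, c_out, c_in, T=310.0):
    """Nernst reversal potential in mV.
    z: ion valence; c_out, c_in: concentrations (any consistent unit); T in K."""
    R = 8.314      # gas constant, J/(mol*K)
    F = 96485.0    # Faraday constant, C/mol
    return 1000.0 * (R * T) / (z * F) * math.log(c_out / c_in)

# Illustrative mammalian concentrations in mM (assumed for this sketch)
E_K  = nernst(+1, c_out=5.0,   c_in=140.0)   # potassium: roughly -89 mV
E_Na = nernst(+1, c_out=145.0, c_in=15.0)    # sodium:    roughly +61 mV
print(f"E_K  = {E_K:.1f} mV")
print(f"E_Na = {E_Na:.1f} mV")
```

Exact values depend on the concentrations and temperature assumed, which is why different sources quote slightly different reversal potentials.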
Communication between neurons typically takes place via chemical synapses. However, there are other ways in which neurons communicate. One example is electrical synapses, or gap junctions, which contain special conducting proteins that allow direct electrical signal transfer between cells.
Synaptic mechanisms and dendritic processing
The arrival of an action potential at the axon terminal triggers a probabilistic release of neurotransmitters into the synaptic cleft. Common neurotransmitters are glutamate (Glu), gamma-aminobutyric acid (GABA), dopamine (DA, implicated in motivation, attention and learning) and acetylcholine (ACh, important for initiating muscle movements, found in neuromuscular junctions). These neurotransmitters bind to specific receptor sites on the postsynaptic dendrites and can open or close neurotransmitter-gated ion channels. A larger amount of neurotransmitter released from the presynaptic neuron typically produces a stronger response in the postsynaptic neuron, but this relationship need not be linear. Also, the same neurotransmitter can have different effects on the postsynaptic neuron depending on the receptor type that it attaches to. Receptors that are tightly coupled to an ion channel are called ionotropic, and these channels typically open rapidly after binding neurotransmitters. Metabotropic receptors, on the other hand, can only influence ion channels via second messengers and are therefore slower and less specific.
Neurotransmitters (and/or the associated synapses) can have excitatory or inhibitory effects on the postsynaptic neuron. Excitatory neurotransmitters/synapses facilitate the entry of positively charged ions into the postsynaptic cell, thereby increasing its membrane potential. Synaptic channels gated by Glu are common examples of excitatory synapses. Now, the dynamics of the excitatory process are influenced by the types of receptors present in the synapse. For example, AMPAR is an ionotropic Glu receptor that can be activated quickly (it is named after alpha-amino-3-hydroxy-5-methylisoxazole-4-propionic acid, which also activates it). NMDAR is another type of Glu receptor, but it is voltage gated and its action is slower: the associated channels are blocked by Mg2+ ions in the neuron's resting state, and it takes a depolarization to kick the Mg2+ out of place and allow the influx of Na+ and Ca2+, the two main ions involved in excitation. An example of an inhibitory neurotransmitter is GABA, with its associated receptors GABA_A (fast) and GABA_B (slow). The neurotransmitter DA has both excitatory and inhibitory associated receptors.
A computational note about inhibition: inhibition can be subtractive or divisive. Subtractive inhibition happens when the postsynaptic membrane potential is lowered overall. Divisive inhibition is when inhibitory synapses have a modulatory (i.e., multiplicative) effect on excitation. GABA_A receptors, for example, have no effect on the membrane potential if the neuron is at rest. Now imagine the case where the neuron gets an excitatory influx of cations: GABA_A receptors act to reduce the excitatory effects, but cease to have any effect once the neuron comes back to rest. This modulation of summed excitatory postsynaptic potentials (EPSPs) is also called shunting inhibition.
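The subtractive-versus-divisive distinction can be made concrete with the steady state of a one-compartment conductance model. In the sketch below (toy parameters of my own, not the book's), a shunting conductance whose reversal potential equals the resting potential has no effect at rest but divides the depolarization caused by an excitatory conductance:

```python
def v_steady(g_L, g_E, g_I, E_L=-65.0, E_E=0.0, E_I=-65.0):
    """Steady-state potential (mV) of a one-compartment model with leak,
    excitatory, and shunting-inhibitory conductances.
    For shunting inhibition the inhibitory reversal E_I equals rest E_L."""
    return (g_L * E_L + g_E * E_E + g_I * E_I) / (g_L + g_E + g_I)

# At rest, opening GABA_A-like shunting channels changes nothing:
print(v_steady(g_L=1.0, g_E=0.0, g_I=0.0))   # -65.0 mV
print(v_steady(g_L=1.0, g_E=0.0, g_I=2.0))   # -65.0 mV
# With excitation present, the same inhibition divides the depolarization:
print(v_steady(g_L=1.0, g_E=1.0, g_I=0.0))   # -32.5  mV (32.5 mV above rest)
print(v_steady(g_L=1.0, g_E=1.0, g_I=2.0))   # -48.75 mV (only 16.25 mV above rest)
```

The inhibitory conductance appears only in the denominator, which is exactly the divisive effect described above.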
Modeling Synaptic Responses
The time course of a postsynaptic potential (PSP) can be measured experimentally. The time course for fast excitatory AMPA and inhibitory GABA receptors can be described by an alpha function (Eq 2.1). The alpha function is merely a description of how the PSP changes in time after a presynaptic event. Now, to implement the alpha function, we need a model of the dendrite and the synaptic mechanism. For this, we treat a segment of the dendrite as one compartment with a certain capacitance and a constant resistance (i.e., a leaky capacitor). We also add a neurotransmitter-gated resistor in series with a battery that represents the potential difference (caused by the concentration difference of the ions) between the inside and the outside of the cell. Conservation of electric charge, as described by Kirchhoff's law, then gives the relationship between the capacitance of the dendritic compartment, the current flowing into the cell and the infinitesimal change of membrane potential in time. This linear differential equation can be solved to obtain the time course of the membrane potential (which will hopefully look like the alpha function).
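For concreteness, here is one common parameterization of an alpha function, normalized so that it peaks with amplitude `amp` at `t = t_peak`; the exact form and constants used in Eq 2.1 may differ:

```python
import math

def alpha_psp(t, t_peak=1.0, amp=1.0):
    """Alpha-function PSP time course: a fast rise followed by a slower
    decay, peaking at t = t_peak with amplitude amp. Zero before the event."""
    if t < 0:
        return 0.0
    return amp * (t / t_peak) * math.exp(1.0 - t / t_peak)

print(alpha_psp(0.0))                 # 0.0 at the presynaptic event
print(alpha_psp(1.0))                 # 1.0 at the peak (t = t_peak)
print(round(alpha_psp(10.0), 4))      # long after the event: back near zero
```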
We can now incorporate more details into this model. The current, for example, can be assumed to be produced by two channels: a leakage channel (see earlier) with a certain conductance and a zero reversal potential (since we measure all other potentials relative to this), and a neurotransmitter-gated ion channel with a time-varying conductance and a certain reversal potential. So we now also need to know the time-varying channel conductance in order to solve the differential equation. We can model it with a model of average channel dynamics. The assumption behind this simple model is that many channels open as soon as they bind neurotransmitters but then close stochastically, much like radioactively decaying material. The integrated time course of the conductance then looks like an exponential decay (except at t = t_delay after the presynaptic spike, a free parameter, at which time there is an additional contribution to the conductance). I'm assuming the role of t_delay is to model other factors (like interactions between ion channels, maybe, and other dendritic mechanisms) that might increase the average conductance at a certain delay after the first presynaptic spike. t_delay is set to 0.01 in the simulations that produced figure 2.5B, so those contributions are being neglected.
Now the opening of the neurotransmitter gated channel gives rise to a synaptic current. The current depends on the conductance of the channel as well as on the membrane potential relative to the reversal potential of the neurotransmitter gated channel. If the channel remains open, then an equilibrium is attained where the synaptic current through the neurotransmitter gated channel matches the leakage current. However, neurotransmitter gated channels close rapidly and so the compartment ultimately comes back to its resting state.
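The whole mechanism described in the last few paragraphs can be sketched in a few lines of forward-Euler integration: a leaky compartment whose synaptic conductance jumps when the presynaptic spike arrives and then decays exponentially. All parameter values below are illustrative, not taken from the book:

```python
# Forward-Euler sketch of a single dendritic compartment driven by a
# synaptic conductance that jumps at the presynaptic spike time and then
# decays exponentially (the average-channel-dynamics model above).
C, g_L, E_L = 1.0, 0.1, 0.0               # capacitance, leak, rest (= 0 here)
E_syn, g_jump, tau_syn = 10.0, 0.5, 2.0   # synaptic reversal, jump, decay time
t_spike, dt, T = 1.0, 0.01, 30.0          # spike time, step, total duration

V, g_syn, V_max, t = 0.0, 0.0, 0.0, 0.0
while t < T:
    if abs(t - t_spike) < dt / 2:     # presynaptic spike: channels open
        g_syn += g_jump
    g_syn -= dt * g_syn / tau_syn     # stochastic closing -> exponential decay
    I = -g_L * (V - E_L) - g_syn * (V - E_syn)   # leak + synaptic current
    V += dt * I / C                   # Kirchhoff: C dV/dt = total current
    V_max = max(V_max, V)
    t += dt

print(f"peak depolarization: {V_max:.2f}")   # EPSP-like rise...
print(f"final potential:     {V:.3f}")       # ...then relaxation back to rest
```

The trace rises while the synaptic current exceeds the leak and then relaxes back to rest once the channels have closed, which is the behavior described above.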
Generating Action Potentials: Hodgkin-Huxley equations
In the previous section, we explored a simple model of how the release of neurotransmitters after a presynaptic event can change the membrane potential of a postsynaptic dendrite. We now need to model how the change in membrane potential can ultimately lead to an action potential in the postsynaptic neuron. Alan Hodgkin and Andrew Huxley measured the action potential generated in the giant axon of the squid. The form of that action potential is shown in Fig 2.6: the membrane potential first increases (depolarization), followed by a sharp decrease which undershoots the resting potential (hyperpolarization), finally coming back to the resting potential. Hodgkin and Huxley did not stop there; they came up with equations that described the form of the action potential, and the mathematical terms in their equations were later identified with specific ion channels.
To generate the above form of the action potential, at minimum two types of voltage-dependent ion channels and one type of static ion channel are necessary. We talked about neurotransmitter-gated ion channels in the previous section; voltage-gated channels, in contrast, open and close depending on the membrane potential. When neurotransmitters open neurotransmitter-gated ion channels and the membrane is sufficiently depolarized, voltage-dependent Na+ channels open (immediately after the membrane potential crosses the required threshold) and Na+ ions rush into the cell, producing the initial rising phase of the action potential. The membrane potential is driven close to the sodium reversal potential, which is around +65mV. The falling phase of the action potential is caused by two processes. First, the sodium channels get blocked by a protein that is part of the channel around 1ms after the channel opens. At around the same time, voltage-gated potassium channels open; unlike the sodium channels, which opened immediately after the threshold was crossed, these channels are slow to open, taking about 1ms. The sodium channels have closed by that time. So when the sodium channels close and the potassium channels open, there is an efflux of K+ leading to a drop in membrane potential. Since the potassium channels now dominate, the membrane potential settles near the potassium reversal potential of around -80mV, which is lower than the resting potential (i.e., undershooting the resting potential, or hyperpolarization). Hyperpolarization causes the potassium channels to close, and the cell eventually returns to its resting potential.
Finally, neurons very often need to generate action potentials repeatedly. In that case, the repeated influx of Na+ and efflux of K+ would eventually increase the Na+ concentration and decrease the K+ concentration inside the cell so much that further influx of Na+ and efflux of K+ would no longer be possible. This is why we have ion pumps in our neurons: membrane proteins that transport ions against their concentration gradients. This requires considerable energy. Ion pumps account for around 70% of the total energy consumption of neurons and about 15% of the total energy consumed by human beings!
The Hodgkin-Huxley model of spike generation is a set of 4 coupled differential equations. I_ion is the net movement of ions across the membrane and g'_ion is the conductance of the ion channels (determined by the number and permeability of the open channels); the prime denotes that the conductance depends on time and voltage. Ohm's law gives the relation between the current, the conductance and the membrane potential V relative to its resting value (Eq 2.7). Three variables n, m and h capture, respectively, the activation of K+ channels, the activation of Na+ channels and the inactivation of Na+ channels. The voltage and time dependence of the conductances g'_Na and g'_K can then be written as a modulation of constant conductances g_ion by the time- and voltage-dependent variables n(V,t), m(V,t) and h(V,t) (Eq 2.8 and 2.9). The specific forms of 2.8 and 2.9 and the dynamics of these channels were chosen so that the measured action potential of the squid giant axon could be described well. The dynamics are modeled by a set of 3 first-order differential equations, one for each gating variable (each linear in its gating variable for a fixed voltage). Finally, since neurons store electrical charge, this is represented by a capacitance in the circuit (Fig. 2.8). The entire spike-generating mechanism can therefore be represented by a capacitance in parallel with three resistors (ion channels), each with its own battery (the reversal potential). Two of the resistances (the Na and K channels) vary depending on the state of the system while one is constant (the leakage channel). Kirchhoff's law gives a 4th differential equation, and the 4 equations are coupled via the dynamic variables n, m and h.
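The four coupled equations can be integrated with a simple forward-Euler scheme. The sketch below uses the standard squid-axon rate functions and parameters, with V measured in mV relative to rest as in the text; the spike-detection threshold and time step are my own choices:

```python
import math

# Forward-Euler integration of the four coupled Hodgkin-Huxley equations
# (standard squid-axon parameters; V measured relative to rest).
C, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3   # uF/cm^2 and mS/cm^2
E_Na, E_K, E_L = 115.0, -12.0, 10.6          # reversal potentials, mV rel. rest

def vtrap(x, y):
    """x / (exp(x/y) - 1) with the removable singularity at x = 0 handled."""
    return y if abs(x / y) < 1e-6 else x / (math.exp(x / y) - 1.0)

def rates(V):
    a_n = 0.01 * vtrap(10.0 - V, 10.0); b_n = 0.125 * math.exp(-V / 80.0)
    a_m = 0.1  * vtrap(25.0 - V, 10.0); b_m = 4.0   * math.exp(-V / 18.0)
    a_h = 0.07 * math.exp(-V / 20.0);   b_h = 1.0 / (math.exp((30.0 - V) / 10.0) + 1.0)
    return a_n, b_n, a_m, b_m, a_h, b_h

def simulate(I_ext, T=100.0, dt=0.01):
    """Count spikes (upward crossings of +50 mV) under constant current I_ext."""
    V = 0.0
    a_n, b_n, a_m, b_m, a_h, b_h = rates(V)
    n, m, h = a_n / (a_n + b_n), a_m / (a_m + b_m), a_h / (a_h + b_h)  # rest
    spikes, above = 0, False
    for _ in range(int(T / dt)):
        a_n, b_n, a_m, b_m, a_h, b_h = rates(V)
        I_Na = g_Na * m**3 * h * (V - E_Na)  # modulated conductances (Eq 2.8/2.9)
        I_K  = g_K  * n**4     * (V - E_K)
        I_Lk = g_L             * (V - E_L)
        V += dt * (I_ext - I_Na - I_K - I_Lk) / C   # Kirchhoff current balance
        n += dt * (a_n * (1 - n) - b_n * n)          # gating-variable dynamics
        m += dt * (a_m * (1 - m) - b_m * m)
        h += dt * (a_h * (1 - h) - b_h * h)
        if V > 50.0 and not above:
            spikes += 1
        above = V > 50.0
    return spikes

print("spikes in 100 ms at I = 0 :", simulate(0.0))    # subthreshold: none
print("spikes in 100 ms at I = 10:", simulate(10.0))   # repetitive firing
```

Sweeping `I_ext` and dividing the spike count by the simulated time gives exactly the activation function discussed next.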
Using the Hodgkin-Huxley model, we can now measure the frequency of spikes as a function of input current. This is called the activation function. Simulations reveal three properties of a Hodgkin-Huxley neuron:
- The neuron fires only when the input current crosses a certain threshold.
- There is a narrow range of frequencies (starting at around 53Hz) at which the neuron fires. Also, the frequency doesn’t increase by much even if the input current is increased.
- Finally, it can be shown that high-frequency noise in the input current tends to linearize the activation function (see Fig. 2.11) and produce a more graded response than the all-or-none threshold behavior noted in the first point.
Why can't the neuron fire above a certain rate? The inactivation of sodium channels toward the end of an action potential makes it impossible to generate the next action potential for about 1ms after they first open during the rising phase. This period is called the absolute refractory period and limits the firing rate to a maximum of 1000 Hz (1 spike/ms). It is also difficult to generate the next action potential during the hyperpolarizing part of the current action potential; this is called the relative refractory period, and it further reduces the maximum possible firing rate. Brainstem neurons can fire at high frequencies, sometimes more than 600Hz, but cortical neurons fire at much lower frequencies, sometimes 1-2Hz. Refractory periods alone cannot explain these extremely low firing rates; it is important to consider issues like interactions within the network (e.g. inhibitory interneurons) to explain this and other details of neuronal firing patterns.
The generated action potential is propagated down the axon via saltatory conduction, which enables faster propagation of the signal. The myelin sheath covering the axon makes it impossible for ions to cross the membrane along the stretch covered by the sheath. However, there are breaks in the sheath called the nodes of Ranvier. Once an action potential is generated at the axon hillock, the conducting fluid inside the axon elevates the potential at the neighboring node of Ranvier, triggering the generation of an action potential there, and this process proceeds down the length of the axon. So instead of having to generate action potentials all along the axon, the neuron only regenerates the action potential at the nodes of Ranvier. This makes for fast propagation of the action potential. Different sources say different things about whether or not this propagation is lossy. My feeling (which is what Trappenberg says in his book as well) is that it is lossy. One reason could be that channel composition and dynamics differ between nodes of Ranvier, so the exact form of the action potential might not be regenerated at each node (just my intuition, need to verify this). Finally, this method of propagation also ensures that the action potential is unidirectional: the previous node of Ranvier is inactive because its sodium channels remain inactivated for around 1ms, so the action potential tends to proceed in the forward direction. However, there is evidence that action potentials can backpropagate into the dendrites, and the change in potential brought about by backpropagating signals influences Ca2+ concentrations, which has implications for models of plasticity like spike-timing-dependent plasticity (STDP).
Extension of the Hodgkin-Huxley equations to include more details – the Wilson model:
The time constants in the equations for the Na+ and K+ dynamic variables in the H-H model show a reciprocal pattern, so the dimensionality of the model can be reduced by setting h = 1 - n. Simulations also show that the dynamics of m are extremely fast, so that variable can be replaced by its equilibrium value. Neocortical neurons, moreover, don't show inactivation of the fast Na+ channels, so h can be set to h = 1. A system of two differential equations can thus be shown to behave very much like the full H-H system.
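A hedged sketch of this reduction, applied to the standard squid-axon equations rather than Wilson's neocortical parameters: m is replaced by its equilibrium value m_inf(V) and h is tied to n via h = 1 - n, leaving only two coupled differential equations (V and n) that still produce spikes:

```python
import math

# Two-variable reduction of the squid-axon Hodgkin-Huxley model:
# m -> m_inf(V) (instantaneous), h -> 1 - n. Parameters as in the full model.
C, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3
E_Na, E_K, E_L = 115.0, -12.0, 10.6          # mV relative to rest

def vtrap(x, y):
    return y if abs(x / y) < 1e-6 else x / (math.exp(x / y) - 1.0)

def n_rates(V):
    return 0.01 * vtrap(10.0 - V, 10.0), 0.125 * math.exp(-V / 80.0)

def m_inf(V):
    a = 0.1 * vtrap(25.0 - V, 10.0)
    b = 4.0 * math.exp(-V / 18.0)
    return a / (a + b)

def simulate_reduced(I_ext, T=100.0, dt=0.01):
    """Count spikes (upward crossings of +50 mV) of the reduced 2-D system."""
    V = 0.0
    a_n, b_n = n_rates(V)
    n = a_n / (a_n + b_n)
    spikes, above = 0, False
    for _ in range(int(T / dt)):
        a_n, b_n = n_rates(V)
        h = 1.0 - n                  # the reduction: h tied to n
        m = m_inf(V)                 # the reduction: m at equilibrium
        I_mem = (g_Na * m**3 * h * (V - E_Na)
                 + g_K * n**4 * (V - E_K)
                 + g_L * (V - E_L))
        V += dt * (I_ext - I_mem) / C
        n += dt * (a_n * (1.0 - n) - b_n * n)
        if V > 50.0 and not above:
            spikes += 1
        above = V > 50.0
    return spikes

print("reduced model spikes at I = 10:", simulate_reduced(10.0))
```

The point is only that qualitatively H-H-like spiking survives the reduction; Wilson's actual equations use different, neocortically fitted terms.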
We can now add in more channels to this system that we know play an important role in mammalian nervous systems. Two types of channels are especially relevant: 1) Ca2+ ion channels with more graded response characteristics (required to produce the graded responses of neocortical neurons), and 2) a channel (like a Ca2+ mediated K+ channel) that produces a slow hyperpolarizing current (necessary to generate more complex firing patterns like those seen in human brains).
The Wilson model can reproduce many aspects of mammalian spike generation, some of which are:
- Regular spiking neurons (RS)
- Fast spiking neurons (FS) – the firing pattern typical of inhibitory interneurons in the neocortex when stimulated by a constant external current.
- Continuously spiking neurons (CS)
- Intrinsic bursting neurons (IB)
Firing rates can decline after the initial response to a stimulus even when the stimulation is maintained. This is called spike rate adaptation, or fatigue. The Wilson model can capture this thanks to the equation for the slow hyperpolarizing channel. See Fig. 2.13 for other complex patterns that are well described (and reproduced) by the Wilson model.
Putting the pieces together – Including neuronal morphology (compartment models):
In addition to taking into account conductance properties of ion channels, we must also consider shapes of neurons to fully understand how they work. We use compartment models to achieve this goal. The neuron and its many dendrites are broken up into hundreds or thousands of small compartments, each governed by a 1st order differential equation.
To get to the 1st order differential equation, we start with the cable equation (Eq 2.26) which describes the spatio-temporal variation of a potential along a cable-like conductor driven by an injected current. The dimensions and physical properties of the cable are taken into account within the linear cable equation. Hodgkin-Huxley style voltage-gated ion channels can also be incorporated into the model yielding nonlinear cable equations which can be solved numerically. Finally, by considering small enough compartments (such that the potential inside a compartment can be assumed to be constant), the spatial differentials in the cable equation can be reduced to difference equations and a 1st order differential equation results for that compartment. So the final compartment model for the neuron is described by a set of first order differential equations.
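A minimal sketch of such a compartment model: a passive cable discretized into identical compartments coupled by an axial conductance, so the second spatial derivative in the cable equation becomes a difference term and each compartment obeys one first-order ODE. All parameters are illustrative:

```python
# Passive compartmental cable: N identical compartments, each a leaky
# capacitor, coupled to its neighbors by an axial conductance g_a.
# Constant current is injected into compartment 0; the steady-state
# potential decays with distance from the injection site.
N = 20
C, g_L, g_a = 1.0, 0.1, 1.0    # membrane capacitance, leak, axial coupling
dt, T = 0.01, 200.0            # Euler step and total time (to reach steady state)
I_inj = 1.0                    # constant current into compartment 0

V = [0.0] * N                  # potentials relative to rest
for _ in range(int(T / dt)):
    V_new = V[:]
    for i in range(N):
        axial = 0.0            # current from neighboring compartments
        if i > 0:
            axial += g_a * (V[i - 1] - V[i])
        if i < N - 1:
            axial += g_a * (V[i + 1] - V[i])
        I_ext = I_inj if i == 0 else 0.0
        # one first-order ODE per compartment: C dV_i/dt = I_ext - leak + axial
        V_new[i] = V[i] + dt * (I_ext - g_L * V[i] + axial) / C
    V = V_new

# Electrotonic decay of the steady-state profile along the cable:
print([round(v, 3) for v in V[:6]])
```

Replacing the constant leak in each compartment with Hodgkin-Huxley-style voltage-gated currents would turn this into the nonlinear, numerically solved version mentioned above.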
That concludes this post. We saw how the neuronal spike generation can be modeled in great detail. In the next couple of posts, we will look at abstractions of neurons so that we can eventually incorporate simplified neuron models into network models.
EDIT: Please read the edit to the introductory post on this blog. I have now decided that a better use of my time is to simply summarize the results of interesting articles/book chapters, instead of summarizing entire articles and chapters. If my posts then spike your interest, please read the original articles for more information. I would like to focus this blog more on my thoughts and evaluation of articles and books. I summarize Ch 2 of Trappenberg in this post. The rest of the book is fabulous as well. In fact, I enjoyed it so much that I think I will seriously consider doing a post-doc job in a computational neuroscience lab that works with such spiking neuron models in trying to explain memory phenomena. I have a couple of labs in mind and I will talk about some brilliant articles that have come out of those labs in my future posts.