How do firing patterns arise from the activity of many ion channels?

In his answer to another question, Bryan Krause says:

Ion channels don't exhibit any firing patterns: neurons exhibit firing patterns that depend on all the channels present […].

I understand it this way: The observable and measurable firing pattern1 of the neuron (created at the trigger zone) is the linear superposition of tiny "firing patterns2" of all the (voltage-gated) ion channels at the trigger zone, which in turn depend on the probabilities of opening and closing, which are the same for all ion channels of the same type and obey some Hodgkin-Huxley-like law. These probabilities correspond directly to the shape of firing pattern1.

The rhythm with which each single sodium channel opens and closes ("firing pattern2") doesn't have to mimic firing pattern1 exactly, only roughly and probabilistically: the open-close ticks must occur near the spikes of firing pattern1, but not exactly and not at every spike. And some complete outliers are allowed. The rest is superposition.

Seeing things this way, individual ion channels do exhibit "firing patterns2", but these may look quite different from firing pattern1 (although not completely).

Is this kind of reasoning correct?

If so: Can the following conclusion be drawn: The time scale of firing pattern1 depends on the number of ion channels at the trigger zone? This might be seen when looking at the rise time of a single action potential: if there is only one sodium channel, the rise time will be longer; if there are many, it will be shorter. Is this correlation strictly linear?


Typically, the activities of individual ion channels are not called firing patterns, since in neuroscience "firing" refers to the elicitation of action potentials (spikes). But yes: whenever an AP is fired, a sufficient number of sodium channels had to be open, so I think your reasoning is correct. In other terms, what you are saying is that the effective channel conductances change during an action potential.

On your conclusion: in the regime of natural parameters, the time scale of firing pattern1 mostly depends on the time constants of the voltage-gated ion channels and not so much on the absolute number of channels (especially if all conductances scaled equally). In Hodgkin-Huxley-like coupled, nonlinear dynamical systems, the voltage does not scale strictly linearly with the participating maximal conductances. The effective conductance, which results from the interplay of maximal conductance and the temporal gating dynamics, itself depends on the voltage, so there are nonlinearities. Hence the correlation between the number of channels and the AP rise time is not expected to be strictly linear.

See the figure for a little example: I tested my reasoning by increasing the maximal sodium conductance of a simple HH model and checking the maximal steepness of the voltage (normalized by peak voltage).
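For anyone who wants to try this kind of test, here is a minimal sketch (not the actual code behind the figure): a standard Hodgkin-Huxley point model with textbook squid-axon parameters, integrated with plain forward Euler, where the maximal sodium conductance is scaled and the maximal normalized steepness of the voltage is read out. The helper names and stimulus values are my own choices.

```python
# Minimal Hodgkin-Huxley sketch: scale the maximal Na+ conductance and
# measure the maximal steepness of the AP upstroke (max dV/dt, normalized
# by the peak voltage deflection). Textbook squid-axon parameters, forward Euler.
import numpy as np

C_m, g_K, g_L = 1.0, 36.0, 0.3           # uF/cm^2, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.387    # mV
g_Na_base = 120.0                         # mS/cm^2

def a_m(V): return 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
def b_m(V): return 4.0 * np.exp(-(V + 65) / 18)
def a_h(V): return 0.07 * np.exp(-(V + 65) / 20)
def b_h(V): return 1.0 / (1 + np.exp(-(V + 35) / 10))
def a_n(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
def b_n(V): return 0.125 * np.exp(-(V + 65) / 80)

def max_normalized_steepness(g_Na_scale, I_ext=10.0, dt=0.01, t_max=50.0):
    """Run one HH simulation and return max(dV/dt) / peak voltage deflection."""
    g_Na = g_Na_scale * g_Na_base
    V = -65.0
    m, h, n = 0.05, 0.6, 0.32             # approximate resting gating values
    V_trace = []
    for _ in range(int(t_max / dt)):
        I_Na = g_Na * m**3 * h * (V - E_Na)
        I_K = g_K * n**4 * (V - E_K)
        I_L = g_L * (V - E_L)
        dV = (I_ext - I_Na - I_K - I_L) / C_m
        m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
        h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
        n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
        V += dt * dV
        V_trace.append(V)
    V_trace = np.array(V_trace)
    dVdt = np.diff(V_trace) / dt
    return dVdt.max() / (V_trace.max() - (-65.0))

for scale in (0.5, 1.0, 2.0, 4.0):
    print("g_Na scale", scale, "-> normalized max steepness",
          round(max_normalized_steepness(scale), 3))
```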

Hope that answers your question(s).


Let me summarize what I believe to have learned from Jojo's and Bryan's comments in the form of a visualization: The red curve is the probability governing the open and closed state of an ion channel (as a function of time; we see two spikes). The dotted patterns below show 50 ion channels fluttering between open (black) and closed (white) states according to the probability of being open. The thin black curve is a smeared-out average of the first ion channel being open (showing two slight elevations, i.e. on a coarser time scale some kind of "firing pattern" - please forgive me, Bryan). The fat black curve is the smeared-out average of all ion channels. The blue curve is the number of channels open at a given point in time, which is roughly proportional to the conductance, current, and/or voltage being measured. Noise is large due to the small number of ion channels.

For 200 ion channels and overlaying the 200 patterns, it would look like this:

Find a higher resolution picture here.

See two related biological scenarios here and here.
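A small sketch of how such a picture can be generated, under the simplifying assumption that each channel is an independent open/closed draw at every time step (real gating kinetics are ignored); it only illustrates how the superposed count of open channels tracks the open-probability curve and how the relative noise shrinks as more channels are added.

```python
# Sketch of the visualization described above: an open-probability curve with
# two "spikes", N channels drawn independently as open/closed at each time
# step according to that probability, and the per-time-step count of open
# channels as the superposed signal. Relative noise shrinks roughly as 1/sqrt(N).
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 1000)

# open probability: two Gaussian bumps ("two spikes")
p_open = 0.05 + 0.8 * (np.exp(-((t - 0.3) / 0.03) ** 2)
                       + np.exp(-((t - 0.7) / 0.03) ** 2))

for n_channels in (25, 200):
    states = rng.random((n_channels, t.size)) < p_open   # True = open
    n_open = states.sum(axis=0)                          # analogue of the blue curve
    relative_noise = np.std(n_open - n_channels * p_open) / n_channels
    print(n_channels, "channels, relative noise ~", round(relative_noise, 3))
```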


Andrea Meredith: How do BK channel dynamics underlie circadian rhythmicity?

Andrea Meredith (Associate Professor, University of Maryland) studies how the BK potassium channel helps to regulate circadian rhythms. BK channels are voltage-activated potassium channels with intracellular calcium binding sites, and are found near calcium release sites in the cell. It is thought that calcium binding shifts the voltage sensitivity of the channel to a physiological range. Activation of these channels can hyperpolarize the membrane by 10–20 mV.

BK channels are widely expressed, but extensively modulated based on the tissue they are expressed in. Changes in alternative splicing, modulatory subunits, and localization relative to calcium sources contribute to the variation in channel properties in different tissues. This diversity has profound effects on membrane properties and the physiology of cells.

Many phenotypes arise from perturbation of this channel – human studies show familial epilepsy, dyskinesia, autism, and a variety of other non-neurological symptoms, as BK channels are expressed throughout the body. Animal models of BK dysfunction exhibit problems with circadian rhythms, locomotion, hearing, learning and memory, and seizures, among other symptoms.

Dr. Meredith’s lab focuses on the circadian system. The suprachiasmatic nucleus (SCN) houses 20,000 neurons that fire rhythmically and change their activity in response to day or night to generate appropriate behaviors. A transcription-translation feedback loop activated by light resets the internal clock every day, but this mechanism can continue to function correctly in the absence of light.

Slice cultures of the SCN continue their rhythmic firing and their day-night activity patterns. What ion channels drive this firing rate? Many channels change their expression levels or behavior in the day vs. at night. BK channels have larger currents at night and smaller ones during the day. Expression levels of BK increase at night, even if the mice are raised in the dark, but change if the classic clock genes underlying the circadian clock are mutated. This makes BK one of the only known ion channel targets of the circadian feedback loop.

Knockouts of the BK channel exhibit normal firing during the day, but hyperactive firing at night. These animals are active all the time, instead of just at night. BK gain-of-function mutants have increased current during the day, equal to nighttime levels. This decreases firing during the day and makes the circuit largely arrhythmic, but has fewer behavioral consequences than the knockout.

How does the regulation of BK explain these phenotypes? The lab focused on the beta-2 modulatory subunit, as it is the only subunit expressed in the SCN that can inactivate the channel. The loss of beta-2 eliminates channel inactivation and the day-night difference in BK current. Levels of beta-2 do not change over the circadian cycle, but since the expression of the alpha subunit is always changing, the ratio between BK and beta-2 changes as well.

The beta-2 subunit can change BK channel properties in other ways in addition to inactivating it, including left-shifting the voltage-dependence and slowing kinetics. However, rescuing the beta-2 knockouts with only the N-terminus of the beta-2 subunit, a 45 amino acid peptide which is known to mediate inactivation only, was sufficient to restore day-night differences in firing.

Applying this inactivating peptide at night changes nighttime BK current levels into daytime levels, and neural excitability increases to daytime levels. More work on mechanism has shown that these changes in firing rate are due to changes in the baseline membrane potential in SCN neurons.

The lab is currently interested in recent work on how the day-night differences in SCN BK currents decrease with aging, and how this difference might be restored. Her lab is also studying the circadian dynamics of BK channels in many other tissues.

Further Reading:

Farajnia, Sahar, Johanna H Meijer, and Stephan Michel. 2015. “Age-Related Changes in Large-Conductance Calcium-Activated Potassium Channels in Mammalian Circadian Clock Neurons.” Neurobiology of Aging 36 (6).

Montgomery, Jenna R, and Andrea L Meredith. 2012. “Genetic Activation of BK Currents in Vivo Generates Bidirectional Effects on Neuronal Excitability.” Proceedings of the National Academy of Sciences of the United States of America 109 (46).

Montgomery, Jenna R, Joshua P Whitt, Breanne N Wright, Michael H Lai, and Andrea L Meredith. 2013. “Mis-Expression of the BK K(+) Channel Disrupts Suprachiasmatic Nucleus Circuit Rhythmicity and Alters Clock-Controlled Behavior.” American Journal of Physiology. Cell Physiology 304 (4).


Abstract

Various studies, mostly in the past 5 years, have demonstrated that, in addition to their well-described function in regulating electrical excitability, voltage-dependent ion channels participate in intracellular signalling pathways. Channels can directly activate enzymes linked to cellular signalling pathways, serve as cell adhesion molecules or components of the cytoskeleton, and their activity can alter the expression of specific genes. Here, I review these findings and discuss the extent to which the molecular mechanisms of such signalling are understood.


Abstract

Electrically excitable cells, such as neurons, exhibit tremendous diversity in their firing patterns, a consequence of the complex collection of ion channels present in any specific cell. Although numerous methods are capable of measuring cellular electrical signals, understanding which types of ion channels give rise to these signals remains a significant challenge. Here, we describe exogenous probes which use a novel mechanism to report activity of voltage-gated channels. We have synthesized chemoselective derivatives of the tarantula toxin guangxitoxin-1E (GxTX), an inhibitory cystine knot peptide that binds selectively to Kv2-type voltage gated potassium channels. We find that voltage activation of Kv2.1 channels triggers GxTX dissociation, and thus GxTX binding dynamically marks Kv2 activation. We identify GxTX residues that can be replaced by thiol- or alkyne-bearing amino acids, without disrupting toxin folding or activity, and chemoselectively ligate fluorophores or affinity probes to these sites. We find that GxTX–fluorophore conjugates colocalize with Kv2.1 clusters in live cells and are released from channels activated by voltage stimuli. Kv2.1 activation can be detected with concentrations of probe that have a trivial impact on cellular currents. Chemoselective GxTX mutants conjugated to dendrimeric beads likewise bind live cells expressing Kv2.1, and the beads are released by channel activation. These optical sensors of conformational change are prototype probes that can indicate when ion channels contribute to electrical signaling.


Contents

Richard Caton discovered electrical activity in the cerebral hemispheres of rabbits and monkeys and presented his findings in 1875. [2] Adolf Beck published in 1890 his observations of spontaneous electrical activity of the brain of rabbits and dogs that included rhythmic oscillations altered by light detected with electrodes directly placed on the surface of the brain. [3] Before Hans Berger, Vladimir Vladimirovich Pravdich-Neminsky published the first animal EEG and the evoked potential of a dog. [4]

Neural oscillations are observed throughout the central nervous system at all levels, and include spike trains, local field potentials and large-scale oscillations which can be measured by electroencephalography (EEG). In general, oscillations can be characterized by their frequency, amplitude and phase. These signal properties can be extracted from neural recordings using time-frequency analysis. In large-scale oscillations, amplitude changes are considered to result from changes in synchronization within a neural ensemble, also referred to as local synchronization. In addition to local synchronization, oscillatory activity of distant neural structures (single neurons or neural ensembles) can synchronize. Neural oscillations and synchronization have been linked to many cognitive functions such as information transfer, perception, motor control and memory. [5] [6] [7]

Neural oscillations have been most widely studied in neural activity generated by large groups of neurons. Large-scale activity can be measured by techniques such as EEG. In general, EEG signals have a broad spectral content similar to pink noise, but also reveal oscillatory activity in specific frequency bands. The first discovered and best-known frequency band is alpha activity (8–12 Hz) [8] that can be detected from the occipital lobe during relaxed wakefulness and which increases when the eyes are closed. [9] Other frequency bands are: delta (1–4 Hz), theta (4–8 Hz), beta (13–30 Hz), low gamma (30–70 Hz), and high gamma (70–150 Hz) frequency bands, where faster rhythms such as gamma activity have been linked to cognitive processing. Indeed, EEG signals change dramatically during sleep and show a transition from faster frequencies to increasingly slower frequencies such as alpha waves. In fact, different sleep stages are commonly characterized by their spectral content. [10] Consequently, neural oscillations have been linked to cognitive states, such as awareness and consciousness. [11] [12]
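As an illustration of how such band-limited activity is quantified in practice, the sketch below estimates power in the standard frequency bands with Welch's method; the band edges follow the values quoted above, and the signal is synthetic (noise plus a 10 Hz component), so the numbers are purely illustrative.

```python
# Sketch: estimate power in the standard EEG frequency bands using Welch's
# method (scipy). The signal is synthetic: white noise plus a 10 Hz "alpha"
# component. Band edges follow the values quoted in the text.
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 12),
         "beta": (13, 30), "low gamma": (30, 70), "high gamma": (70, 150)}

def band_powers(signal, fs):
    freqs, psd = welch(signal, fs=fs, nperseg=2 * fs)
    df = freqs[1] - freqs[0]
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum() * df
            for name, (lo, hi) in BANDS.items()}

fs = 500
t = np.arange(0, 30, 1 / fs)
eeg = np.random.default_rng(0).standard_normal(t.size) + 2 * np.sin(2 * np.pi * 10 * t)
for band, power in band_powers(eeg, fs).items():
    print(f"{band:10s} {power:.2f}")
```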

Although neural oscillations in human brain activity are mostly investigated using EEG recordings, they are also observed using more invasive recording techniques such as single-unit recordings. Neurons can generate rhythmic patterns of action potentials or spikes. Some types of neurons have the tendency to fire at particular frequencies, so-called resonators. [13] Bursting is another form of rhythmic spiking. Spiking patterns are considered fundamental for information coding in the brain. Oscillatory activity can also be observed in the form of subthreshold membrane potential oscillations (i.e. in the absence of action potentials). [14] If numerous neurons spike in synchrony, they can give rise to oscillations in local field potentials. Quantitative models can estimate the strength of neural oscillations in recorded data. [15]

Neural oscillations are commonly studied from a mathematical framework and belong to the field of "neurodynamics", an area of research in the cognitive sciences that places a strong focus upon the dynamic character of neural activity in describing brain function. [16] It considers the brain a dynamical system and uses differential equations to describe how neural activity evolves over time. In particular, it aims to relate dynamic patterns of brain activity to cognitive functions such as perception and memory. In very abstract form, neural oscillations can be analyzed analytically. When studied in a more physiologically realistic setting, oscillatory activity is generally studied using computer simulations of a computational model.

The functions of neural oscillations are wide-ranging and vary for different types of oscillatory activity. Examples are the generation of rhythmic activity such as a heartbeat and the neural binding of sensory features in perception, such as the shape and color of an object. Neural oscillations also play an important role in many neurological disorders, such as excessive synchronization during seizure activity in epilepsy or tremor in patients with Parkinson's disease. Oscillatory activity can also be used to control external devices such as a brain–computer interface. [17]

Oscillatory activity is observed throughout the central nervous system at all levels of organization. Three different levels have been widely recognized: the micro-scale (activity of a single neuron), the meso-scale (activity of a local group of neurons) and the macro-scale (activity of different brain regions). [18]

Microscopic

Neurons generate action potentials resulting from changes in the electric membrane potential. Neurons can generate multiple action potentials in sequence forming so-called spike trains. These spike trains are the basis for neural coding and information transfer in the brain. Spike trains can form all kinds of patterns, such as rhythmic spiking and bursting, and often display oscillatory activity. [19] Oscillatory activity in single neurons can also be observed in sub-threshold fluctuations in membrane potential. These rhythmic changes in membrane potential do not reach the critical threshold and therefore do not result in an action potential. They can result from postsynaptic potentials from synchronous inputs or from intrinsic properties of neurons.

Neurons can be classified by their spiking activity patterns. The excitability of neurons can be subdivided into Class I and Class II. Class I neurons can generate action potentials with arbitrarily low frequency depending on the input strength, whereas Class II neurons generate action potentials in a certain frequency band, which is relatively insensitive to changes in input strength. [13] Class II neurons are also more prone to display sub-threshold oscillations in membrane potential.

Mesoscopic

A group of neurons can also generate oscillatory activity. Through synaptic interactions, the firing patterns of different neurons may become synchronized and the rhythmic changes in electric potential caused by their action potentials will add up (constructive interference). That is, synchronized firing patterns result in synchronized input into other cortical areas, which gives rise to large-amplitude oscillations of the local field potential. These large-scale oscillations can also be measured outside the scalp using electroencephalography (EEG) and magnetoencephalography (MEG). The electric potentials generated by single neurons are far too small to be picked up outside the scalp, and EEG or MEG activity always reflects the summation of the synchronous activity of thousands or millions of neurons that have similar spatial orientation. [20] Neurons in a neural ensemble rarely all fire at exactly the same moment, i.e. fully synchronized. Instead, the probability of firing is rhythmically modulated such that neurons are more likely to fire at the same time, which gives rise to oscillations in their mean activity (see figure at top of page). As such, the frequency of large-scale oscillations does not need to match the firing pattern of individual neurons. Isolated cortical neurons fire regularly under certain conditions, but in the intact brain cortical cells are bombarded by highly fluctuating synaptic inputs and typically fire seemingly at random. However, if the probability of a large group of neurons is rhythmically modulated at a common frequency, they will generate oscillations in the mean field (see also figure at top of page). [19] Neural ensembles can generate oscillatory activity endogenously through local interactions between excitatory and inhibitory neurons. In particular, inhibitory interneurons play an important role in producing neural ensemble synchrony by generating a narrow window for effective excitation and rhythmically modulating the firing rate of excitatory neurons. [21]

Macroscopic

Neural oscillation can also arise from interactions between different brain areas coupled through the structural connectome. Time delays play an important role here. Because all brain areas are bidirectionally coupled, these connections between brain areas form feedback loops. Positive feedback loops tend to cause oscillatory activity where frequency is inversely related to the delay time. An example of such a feedback loop is the connections between the thalamus and cortex – the thalamocortical radiations. This thalamocortical network is able to generate oscillatory activity known as recurrent thalamo-cortical resonance. [22] The thalamocortical network plays an important role in the generation of alpha activity. [23] [24] In a whole-brain network model with realistic anatomical connectivity and propagation delays between brain areas, oscillations in the beta frequency range emerge from the partial synchronisation of subsets of brain areas oscillating in the gamma-band (generated at the mesoscopic level). [25]

Neuronal properties

Scientists have identified some intrinsic neuronal properties that play an important role in generating membrane potential oscillations. In particular, voltage-gated ion channels are critical in the generation of action potentials. The dynamics of these ion channels have been captured in the well-established Hodgkin–Huxley model that describes how action potentials are initiated and propagated by means of a set of differential equations. Using bifurcation analysis, different oscillatory varieties of these neuronal models can be determined, allowing for the classification of types of neuronal responses. The oscillatory dynamics of neuronal spiking as identified in the Hodgkin–Huxley model closely agree with empirical findings. In addition to periodic spiking, subthreshold membrane potential oscillations, i.e. resonance behavior that does not result in action potentials, may also contribute to oscillatory activity by facilitating synchronous activity of neighboring neurons. [26] [27] Like pacemaker neurons in central pattern generators, subtypes of cortical cells fire bursts of spikes (brief clusters of spikes) rhythmically at preferred frequencies. Bursting neurons have the potential to serve as pacemakers for synchronous network oscillations, and bursts of spikes may underlie or enhance neuronal resonance. [19]

Network properties

Apart from intrinsic properties of neurons, biological neural network properties are also an important source of oscillatory activity. Neurons communicate with one another via synapses and affect the timing of spike trains in the post-synaptic neurons. Depending on the properties of the connection, such as the coupling strength, time delay and whether coupling is excitatory or inhibitory, the spike trains of the interacting neurons may become synchronized. [28] Neurons are locally connected, forming small clusters that are called neural ensembles. Certain network structures promote oscillatory activity at specific frequencies. For example, neuronal activity generated by two populations of interconnected inhibitory and excitatory cells can show spontaneous oscillations that are described by the Wilson-Cowan model.
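A minimal sketch of that two-population idea in the spirit of the Wilson-Cowan model: one excitatory and one inhibitory rate variable coupled through sigmoids. The coupling constants below are placeholders taken from a commonly quoted oscillatory regime of the 1972 model; whether the simulated network settles to a fixed point or a limit cycle depends on them, so treat the numbers as a starting point rather than fitted values.

```python
# Minimal Wilson-Cowan sketch: coupled excitatory (E) and inhibitory (I)
# population rates. Parameter values are placeholders from a commonly quoted
# limit-cycle regime; adjust them to explore fixed points vs. oscillations.
import numpy as np

def sigmoid(x, a, theta):
    # Wilson-Cowan response function, shifted so that sigmoid(0) = 0
    return 1.0 / (1 + np.exp(-a * (x - theta))) - 1.0 / (1 + np.exp(a * theta))

c1, c2, c3, c4 = 16.0, 12.0, 15.0, 3.0
a_e, th_e, a_i, th_i = 1.3, 4.0, 2.0, 3.7
P, Q = 1.25, 0.0          # external drive to E and I
tau, dt, steps = 8.0, 0.05, 20000   # ms, ms, number of Euler steps

E, I = 0.1, 0.05
trace = []
for _ in range(steps):
    dE = (-E + (1 - E) * sigmoid(c1 * E - c2 * I + P, a_e, th_e)) / tau
    dI = (-I + (1 - I) * sigmoid(c3 * E - c4 * I + Q, a_i, th_i)) / tau
    E += dt * dE
    I += dt * dI
    trace.append(E)

trace = np.array(trace)
print("E range over the last half of the run:",
      round(trace[steps // 2:].min(), 3), "to", round(trace[steps // 2:].max(), 3))
```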

If a group of neurons engages in synchronized oscillatory activity, the neural ensemble can be mathematically represented as a single oscillator. [18] Different neural ensembles are coupled through long-range connections and form a network of weakly coupled oscillators at the next spatial scale. Weakly coupled oscillators can generate a range of dynamics including oscillatory activity. [29] Long-range connections between different brain structures, such as the thalamus and the cortex (see thalamocortical oscillation), involve time-delays due to the finite conduction velocity of axons. Because most connections are reciprocal, they form feed-back loops that support oscillatory activity. Oscillations recorded from multiple cortical areas can become synchronized to form large scale brain networks, whose dynamics and functional connectivity can be studied by means of spectral analysis and Granger causality measures. [30] Coherent activity of large-scale brain activity may form dynamic links between brain areas required for the integration of distributed information. [12]

Neuromodulation

In addition to fast direct synaptic interactions between neurons forming a network, oscillatory activity is regulated by neuromodulators on a much slower time scale. That is, the concentration levels of certain neurotransmitters are known to regulate the amount of oscillatory activity. For instance, GABA concentration has been shown to be positively correlated with frequency of oscillations in induced stimuli. [31] A number of nuclei in the brainstem have diffuse projections throughout the brain influencing concentration levels of neurotransmitters such as norepinephrine, acetylcholine and serotonin. These neurotransmitter systems affect the physiological state, e.g., wakefulness or arousal, and have a pronounced effect on amplitude of different brain waves, such as alpha activity. [32]

Oscillations can often be described and analyzed using mathematics. Mathematicians have identified several dynamical mechanisms that generate rhythmicity. Among the most important are harmonic (linear) oscillators, limit cycle oscillators, and delayed-feedback oscillators. [33] Harmonic oscillations appear very frequently in nature—examples are sound waves, the motion of a pendulum, and vibrations of every sort. They generally arise when a physical system is perturbed by a small degree from a minimum-energy state, and are well understood mathematically. Noise-driven harmonic oscillators realistically simulate alpha rhythm in the waking EEG as well as slow waves and spindles in the sleep EEG. Successful EEG analysis algorithms were based on such models. Several other EEG components are better described by limit-cycle or delayed-feedback oscillations. Limit-cycle oscillations arise from physical systems that show large deviations from equilibrium, whereas delayed-feedback oscillations arise when components of a system affect each other after significant time delays. Limit-cycle oscillations can be complex, but there are powerful mathematical tools for analyzing them; the mathematics of delayed-feedback oscillations is primitive in comparison. Linear oscillators and limit-cycle oscillators qualitatively differ in terms of how they respond to fluctuations in input. In a linear oscillator, the frequency is more or less constant but the amplitude can vary greatly. In a limit-cycle oscillator, the amplitude tends to be more or less constant but the frequency can vary greatly. A heartbeat is an example of a limit-cycle oscillation in that the frequency of beats varies widely, while each individual beat continues to pump about the same amount of blood.
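As a toy illustration of the first of these mechanisms, the sketch below integrates a damped harmonic oscillator tuned near 10 Hz and driven by white noise; its power spectrum shows an alpha-like peak. The damping, frequency and noise amplitude are arbitrary illustrative values, not fitted to EEG data.

```python
# Sketch of a noise-driven damped harmonic oscillator tuned near 10 Hz,
# as a toy model of an alpha-like rhythm. All parameter values are illustrative.
import numpy as np
from scipy.signal import welch

fs = 250.0
dt = 1.0 / fs
f0, gamma = 10.0, 5.0                  # natural frequency (Hz), damping (1/s)
omega0 = 2 * np.pi * f0

rng = np.random.default_rng(2)
x, v = 0.0, 0.0
trace = np.empty(int(60 * fs))         # 60 s of "signal"
for i in range(trace.size):
    # semi-implicit Euler step of x'' + 2*gamma*x' + omega0^2*x = noise
    a = -2 * gamma * v - omega0**2 * x + 50.0 * rng.standard_normal() / np.sqrt(dt)
    v += dt * a
    x += dt * v
    trace[i] = x

freqs, psd = welch(trace, fs=fs, nperseg=1024)
print("spectral peak near", round(freqs[np.argmax(psd)], 1), "Hz")
```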

Computational models adopt a variety of abstractions in order to describe complex oscillatory dynamics observed in brain activity. Many models are used in the field, each defined at a different level of abstraction and trying to model different aspects of neural systems. They range from models of the short-term behaviour of individual neurons, through models of how the dynamics of neural circuitry arise from interactions between individual neurons, to models of how behaviour can arise from abstract neural modules that represent complete subsystems.

Single neuron model

A model of a biological neuron is a mathematical description of the properties of nerve cells, or neurons, that is designed to accurately describe and predict their biological processes. The most successful and widely used model of neurons, the Hodgkin–Huxley model, is based on data from the squid giant axon. It is a set of nonlinear ordinary differential equations that approximates the electrical characteristics of a neuron, in particular the generation and propagation of action potentials. The model is very accurate and detailed, and Hodgkin and Huxley received the 1963 Nobel Prize in Physiology or Medicine for this work.

The mathematics of the Hodgkin–Huxley model are quite complicated and several simplifications have been proposed, such as the FitzHugh–Nagumo model, the Hindmarsh–Rose model or the capacitor-switch model [34] as an extension of the integrate-and-fire model. Such models only capture the basic neuronal dynamics, such as rhythmic spiking and bursting, but are more computationally efficient. This allows the simulation of a large number of interconnected neurons that form a neural network.
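A sketch of one such simplification, the FitzHugh–Nagumo model: two coupled equations that reproduce rhythmic spiking at a fraction of the cost of the full Hodgkin–Huxley system. The parameter values are the commonly used textbook ones, the drive current is chosen so the model sits in its tonically spiking regime, and the spike-counting threshold is an arbitrary choice.

```python
# FitzHugh-Nagumo sketch: a two-variable simplification of Hodgkin-Huxley
# that produces rhythmic spiking. Textbook parameters; I_ext places the
# model in its tonically spiking (limit-cycle) regime.
a, b, eps, I_ext = 0.7, 0.8, 0.08, 0.5
dt, t_max = 0.01, 400.0

v, w = -1.0, -0.5
spikes, above = 0, False
for _ in range(int(t_max / dt)):
    dv = v - v**3 / 3 - w + I_ext      # fast "voltage-like" variable
    dw = eps * (v + a - b * w)         # slow recovery variable
    v += dt * dv
    w += dt * dw
    if v > 1.0 and not above:          # crude upward threshold crossing
        spikes += 1
        above = True
    elif v < 0.0:
        above = False

print("spikes in", t_max, "time units:", spikes)
```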

Spiking model

A neural network model describes a population of physically interconnected neurons or a group of disparate neurons whose inputs or signalling targets define a recognizable circuit. These models aim to describe how the dynamics of neural circuitry arise from interactions between individual neurons. Local interactions between neurons can result in the synchronization of spiking activity and form the basis of oscillatory activity. In particular, models of interacting pyramidal cells and inhibitory interneurons have been shown to generate brain rhythms such as gamma activity. [35] Similarly, it was shown that simulations of neural networks with a phenomenological model for neuronal response failures can predict spontaneous broadband neural oscillations. [36]

Neural mass model

Neural field models are another important tool in studying neural oscillations and are a mathematical framework describing evolution of variables such as mean firing rate in space and time. In modeling the activity of large numbers of neurons, the central idea is to take the density of neurons to the continuum limit, resulting in spatially continuous neural networks. Instead of modelling individual neurons, this approach approximates a group of neurons by its average properties and interactions. It is based on the mean field approach, an area of statistical physics that deals with large-scale systems. Models based on these principles have been used to provide mathematical descriptions of neural oscillations and EEG rhythms. They have for instance been used to investigate visual hallucinations. [38]

Kuramoto model

The Kuramoto model of coupled phase oscillators [39] is one of the most abstract and fundamental models used to investigate neural oscillations and synchronization. It captures the activity of a local system (e.g., a single neuron or neural ensemble) by its circular phase alone and hence ignores the amplitude of oscillations (amplitude is constant). [40] Interactions amongst these oscillators are introduced by a simple algebraic form (such as a sine function) and collectively generate a dynamical pattern at the global scale. The Kuramoto model is widely used to study oscillatory brain activity and several extensions have been proposed that increase its neurobiological plausibility, for instance by incorporating topological properties of local cortical connectivity. [41] In particular, it describes how the activity of a group of interacting neurons can become synchronized and generate large-scale oscillations. Simulations using the Kuramoto model with realistic long-range cortical connectivity and time-delayed interactions reveal the emergence of slow patterned fluctuations that reproduce resting-state BOLD functional maps, which can be measured using fMRI. [42]
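A minimal Kuramoto sketch: phase oscillators with random natural frequencies, globally coupled through a sine interaction, with the order parameter r tracking how synchronized the population is. The population size, coupling strength and frequency spread are illustrative values only.

```python
# Kuramoto sketch: N phase oscillators with random natural frequencies,
# globally coupled through a sine interaction. The order parameter r
# measures synchronization (r -> 1 when phases lock).
import numpy as np

rng = np.random.default_rng(3)
N, K, dt, steps = 200, 2.0, 0.01, 5000
omega = rng.normal(loc=2 * np.pi * 1.0, scale=0.5, size=N)   # natural frequencies (rad/s)
theta = rng.uniform(0, 2 * np.pi, size=N)                    # initial phases

for step in range(steps):
    z = np.exp(1j * theta).mean()            # complex order parameter
    r, psi = np.abs(z), np.angle(z)
    # mean-field form of the Kuramoto coupling term
    theta += dt * (omega + K * r * np.sin(psi - theta))
    if step % 1000 == 0:
        print(f"t={step * dt:5.1f}  r={r:.2f}")
```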

Both single neurons and groups of neurons can generate oscillatory activity spontaneously. In addition, they may show oscillatory responses to perceptual input or motor output. Some types of neurons will fire rhythmically in the absence of any synaptic input. Likewise, brain-wide activity reveals oscillatory activity while subjects do not engage in any activity, so-called resting-state activity. These ongoing rhythms can change in different ways in response to perceptual input or motor output. Oscillatory activity may respond by increases or decreases in frequency and amplitude or show a temporary interruption, which is referred to as phase resetting. In addition, external activity may not interact with ongoing activity at all, resulting in an additive response.

Figure panels (not shown): the frequency of ongoing oscillatory activity increases between t1 and t2; the amplitude increases between t1 and t2; the phase is reset at t1; activity is linearly added to the ongoing oscillation between t1 and t2.

Ongoing activity

Spontaneous activity is brain activity in the absence of an explicit task, such as sensory input or motor output, and hence also referred to as resting-state activity. It is opposed to induced activity, i.e. brain activity that is induced by sensory stimuli or motor responses. The term ongoing brain activity is used in electroencephalography and magnetoencephalography for those signal components that are not associated with the processing of a stimulus or the occurrence of specific other events, such as moving a body part, i.e. events that do not form evoked potentials/evoked fields, or induced activity. Spontaneous activity is usually considered to be noise if one is interested in stimulus processing; however, spontaneous activity is considered to play a crucial role during brain development, such as in network formation and synaptogenesis. Spontaneous activity may be informative regarding the current mental state of the person (e.g. wakefulness, alertness) and is often used in sleep research. Certain types of oscillatory activity, such as alpha waves, are part of spontaneous activity. Statistical analysis of power fluctuations of alpha activity reveals a bimodal distribution, i.e. a high- and low-amplitude mode, and hence shows that resting-state activity does not just reflect a noise process. [43] In the case of fMRI, spontaneous fluctuations in the blood-oxygen-level dependent (BOLD) signal reveal correlation patterns that are linked to resting-state networks, such as the default network. [44] The temporal evolution of resting state networks is correlated with fluctuations of oscillatory EEG activity in different frequency bands. [45]

Ongoing brain activity may also have an important role in perception, as it may interact with activity related to incoming stimuli. Indeed, EEG studies suggest that visual perception is dependent on both the phase and amplitude of cortical oscillations. For instance, the amplitude and phase of alpha activity at the moment of visual stimulation predicts whether a weak stimulus will be perceived by the subject. [46] [47] [48]

Frequency response

In response to input, a neuron or neuronal ensemble may change the frequency at which it oscillates, thus changing the rate at which it spikes. Often, a neuron's firing rate depends on the summed activity it receives. Frequency changes are also commonly observed in central pattern generators and directly relate to the speed of motor activities, such as step frequency in walking. However, changes in relative oscillation frequency between different brain areas are not so common because the frequency of oscillatory activity is often related to the time delays between brain areas.

Amplitude response

Next to evoked activity, neural activity related to stimulus processing may result in induced activity. Induced activity refers to modulation in ongoing brain activity induced by processing of stimuli or movement preparation. Hence, it reflects an indirect response, in contrast to evoked responses. A well-studied type of induced activity is amplitude change in oscillatory activity. For instance, gamma activity often increases during increased mental activity such as during object representation. [49] Because induced responses may have different phases across measurements and therefore would cancel out during averaging, they can only be obtained using time-frequency analysis. Induced activity generally reflects the activity of numerous neurons: amplitude changes in oscillatory activity are thought to arise from the synchronization of neural activity, for instance by synchronization of spike timing or membrane potential fluctuations of individual neurons. Increases in oscillatory activity are therefore often referred to as event-related synchronization, while decreases are referred to as event-related desynchronization. [50]

Phase resetting

Phase resetting occurs when input to a neuron or neuronal ensemble resets the phase of ongoing oscillations. [51] It is very common in single neurons where spike timing is adjusted to neuronal input (a neuron may spike at a fixed delay in response to periodic input, which is referred to as phase locking [13] ) and may also occur in neuronal ensembles when the phases of their neurons are adjusted simultaneously. Phase resetting is fundamental for the synchronization of different neurons or different brain regions [12] [29] because the timing of spikes can become phase locked to the activity of other neurons.

Phase resetting also permits the study of evoked activity, a term used in electroencephalography and magnetoencephalography for responses in brain activity that are directly related to stimulus-related activity. Evoked potentials and event-related potentials are obtained from an electroencephalogram by stimulus-locked averaging, i.e. averaging different trials at fixed latencies around the presentation of a stimulus. As a consequence, those signal components that are the same in each single measurement are conserved and all others, i.e. ongoing or spontaneous activity, are averaged out. That is, event-related potentials only reflect oscillations in brain activity that are phase-locked to the stimulus or event. Evoked activity is often considered to be independent from ongoing brain activity, although this is an ongoing debate. [52] [53]
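A toy sketch of such stimulus-locked averaging: each simulated trial contains an ongoing 10 Hz oscillation with a random phase plus a small transient at a fixed latency after the "stimulus"; averaging across trials cancels the non-phase-locked oscillation and leaves the phase-locked (evoked) component. All amplitudes and latencies are invented for illustration.

```python
# Sketch of stimulus-locked averaging: ongoing 10 Hz activity with random
# phase cancels across trials, while a phase-locked transient (the "evoked"
# component) survives. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(4)
fs, n_trials = 500, 200
t = np.arange(-0.2, 0.8, 1 / fs)                 # time relative to stimulus (s)
evoked = 3.0 * np.exp(-((t - 0.1) / 0.02) ** 2)  # transient ~100 ms after stimulus

trials = np.empty((n_trials, t.size))
for k in range(n_trials):
    phase = rng.uniform(0, 2 * np.pi)            # ongoing alpha with random phase
    trials[k] = (5.0 * np.sin(2 * np.pi * 10 * t + phase)
                 + evoked
                 + rng.standard_normal(t.size))

erp = trials.mean(axis=0)                        # stimulus-locked average
print("single-trial peak-to-peak:", round(np.ptp(trials[0]), 1))
print("averaged peak-to-peak:    ", round(np.ptp(erp), 1))
print("ERP peak latency (ms):    ", round(1000 * t[np.argmax(erp)]))
```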

Asymmetric amplitude modulation

It has recently been proposed that even if phases are not aligned across trials, induced activity may still cause event-related potentials because ongoing brain oscillations may not be symmetric and thus amplitude modulations may result in a baseline shift that does not average out. [54] [55] This model implies that slow event-related responses, such as asymmetric alpha activity, could result from asymmetric brain oscillation amplitude modulations, such as an asymmetry of the intracellular currents that propagate forward and backward down the dendrites. [56] Under this assumption, asymmetries in the dendritic current would cause asymmetries in oscillatory activity measured by EEG and MEG, since dendritic currents in pyramidal cells are generally thought to generate EEG and MEG signals that can be measured at the scalp. [57]

Neural synchronization can be modulated by task constraints, such as attention, and is thought to play a role in feature binding, [58] neuronal communication, [5] and motor coordination. [7] Neuronal oscillations became a hot topic in neuroscience in the 1990s when the studies of the visual system of the brain by Gray, Singer and others appeared to support the neural binding hypothesis. [59] According to this idea, synchronous oscillations in neuronal ensembles bind neurons representing different features of an object. For example, when a person looks at a tree, visual cortex neurons representing the tree trunk and those representing the branches of the same tree would oscillate in synchrony to form a single representation of the tree. This phenomenon is best seen in local field potentials which reflect the synchronous activity of local groups of neurons, but has also been shown in EEG and MEG recordings providing increasing evidence for a close relation between synchronous oscillatory activity and a variety of cognitive functions such as perceptual grouping. [58]

Pacemaker

Cells in the sinoatrial node, located in the right atrium of the heart, spontaneously depolarize approximately 100 times per minute. Although all of the heart's cells have the ability to generate action potentials that trigger cardiac contraction, the sinoatrial node normally initiates it, simply because it generates impulses slightly faster than the other areas. Hence, these cells generate the normal sinus rhythm and are called pacemaker cells as they directly control the heart rate. In the absence of extrinsic neural and hormonal control, cells in the SA node will rhythmically discharge. The sinoatrial node is richly innervated by the autonomic nervous system, which up or down regulates the spontaneous firing frequency of the pacemaker cells.

Central pattern generator

Synchronized firing of neurons also forms the basis of periodic motor commands for rhythmic movements. These rhythmic outputs are produced by a group of interacting neurons that form a network, called a central pattern generator. Central pattern generators are neuronal circuits that—when activated—can produce rhythmic motor patterns in the absence of sensory or descending inputs that carry specific timing information. Examples are walking, breathing, and swimming. [60] Most evidence for central pattern generators comes from lower animals, such as the lamprey, but there is also evidence for spinal central pattern generators in humans. [61] [62]

Information processing

Neuronal spiking is generally considered the basis for information transfer in the brain. For such a transfer, information needs to be coded in a spiking pattern. Different types of coding schemes have been proposed, such as rate coding and temporal coding. Neural oscillations could create periodic time windows in which input spikes have larger effect on neurons, thereby providing a mechanism for decoding temporal codes. [63]

Perception

Synchronization of neuronal firing may serve as a means to group spatially segregated neurons that respond to the same stimulus in order to bind these responses for further joint processing, i.e. to exploit temporal synchrony to encode relations. Purely theoretical formulations of the binding-by-synchrony hypothesis were proposed first, [64] but subsequently extensive experimental evidence has been reported supporting the potential role of synchrony as a relational code. [65]

The functional role of synchronized oscillatory activity in the brain was mainly established in experiments performed on awake kittens with multiple electrodes implanted in the visual cortex. These experiments showed that groups of spatially segregated neurons engage in synchronous oscillatory activity when activated by visual stimuli. The frequency of these oscillations was in the range of 40 Hz and differed from the periodic activation induced by the grating, suggesting that the oscillations and their synchronization were due to internal neuronal interactions. [65] Similar findings were shown in parallel by the group of Eckhorn, providing further evidence for the functional role of neural synchronization in feature binding. [66] Since then, numerous studies have replicated these findings and extended them to different modalities such as EEG, providing extensive evidence of the functional role of gamma oscillations in visual perception.

Gilles Laurent and colleagues showed that oscillatory synchronization has an important functional role in odor perception. Perceiving different odors leads to different subsets of neurons firing on different sets of oscillatory cycles. [67] These oscillations can be disrupted by the GABA blocker picrotoxin, [68] and the disruption of the oscillatory synchronization leads to impairment of behavioral discrimination of chemically similar odorants in bees [69] and to more similar responses across odors in downstream β-lobe neurons. [70] Recent follow-up of this work has shown that oscillations create periodic integration windows for Kenyon cells in the insect mushroom body, such that incoming spikes from the antennal lobe are more effective in activating Kenyon cells only at specific phases of the oscillatory cycle. [63]

Neural oscillations are also thought to be involved in the sense of time [71] and in somatosensory perception. [72] However, recent findings argue against a clock-like function of cortical gamma oscillations. [73]

Motor coordination

Oscillations have been commonly reported in the motor system. Pfurtscheller and colleagues found a reduction in alpha (8–12 Hz) and beta (13–30 Hz) oscillations in EEG activity when subjects made a movement. [50] [74] Using intra-cortical recordings, similar changes in oscillatory activity were found in the motor cortex when monkeys performed motor acts that required significant attention. [75] [76] In addition, oscillations at spinal level become synchronised to beta oscillations in the motor cortex during constant muscle activation, as determined by cortico-muscular coherence. [77] [78] [79] Likewise, muscle activity of different muscles reveals inter-muscular coherence at multiple distinct frequencies reflecting the underlying neural circuitry involved in motor coordination. [80] [81]
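A hedged sketch of the coherence computation behind such findings: a synthetic "cortical" and a synthetic "muscle" channel share a common 20 Hz drive plus independent noise, and the magnitude-squared coherence (scipy.signal.coherence) peaks near the shared frequency. The signals and parameter values are made up for illustration.

```python
# Sketch of cortico-muscular coherence: synthetic "EEG" and "EMG" channels
# sharing a common ~20 Hz beta drive plus independent noise; the
# magnitude-squared coherence peaks near the shared frequency.
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(5)
fs = 1000
t = np.arange(0, 60, 1 / fs)
beta_drive = np.sin(2 * np.pi * 20 * t)

eeg = beta_drive + 2.0 * rng.standard_normal(t.size)
emg = 0.5 * beta_drive + 2.0 * rng.standard_normal(t.size)

freqs, coh = coherence(eeg, emg, fs=fs, nperseg=2 * fs)
print("coherence peak at", round(freqs[np.argmax(coh)], 1),
      "Hz, value", round(coh.max(), 2))
```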

Recently it was found that cortical oscillations propagate as travelling waves across the surface of the motor cortex along dominant spatial axes characteristic of the local circuitry of the motor cortex. [82] It has been proposed that motor commands in the form of travelling waves can be spatially filtered by the descending fibres to selectively control muscle force. [83] Simulations have shown that ongoing wave activity in cortex can elicit steady muscle force with physiological levels of EEG-EMG coherence. [84]

Oscillatory rhythms at 10 Hz have been recorded in a brain area called the inferior olive, which is associated with the cerebellum. [14] These oscillations are also observed in motor output of physiological tremor [85] and when performing slow finger movements. [86] These findings may indicate that the human brain controls continuous movements intermittently. In support, it was shown that these movement discontinuities are directly correlated to oscillatory activity in a cerebello-thalamo-cortical loop, which may represent a neural mechanism for the intermittent motor control. [87]

Memory

Neural oscillations, in particular theta activity, are extensively linked to memory function. Theta rhythms are very strong in rodent hippocampi and entorhinal cortex during learning and memory retrieval, and they are believed to be vital to the induction of long-term potentiation, a potential cellular mechanism for learning and memory. Coupling between theta and gamma activity is thought to be vital for memory functions, including episodic memory. [88] [89] Tight coordination of single-neuron spikes with local theta oscillations is linked to successful memory formation in humans, as more stereotyped spiking predicts better memory. [90]

Sleep and consciousness

Sleep is a naturally recurring state characterized by reduced or absent consciousness and proceeds in cycles of rapid eye movement (REM) and non-rapid eye movement (NREM) sleep. Sleep stages are characterized by spectral content of EEG: for instance, stage N1 refers to the transition of the brain from alpha waves (common in the awake state) to theta waves, whereas stage N3 (deep or slow-wave sleep) is characterized by the presence of delta waves. The normal order of sleep stages is N1 → N2 → N3 → N2 → REM.

Development

Neural oscillations may play a role in neural development. For example, retinal waves are thought to have properties that define early connectivity of circuits and synapses between cells in the retina. [91]

Specific types of neural oscillations may also appear in pathological situations, such as Parkinson's disease or epilepsy. These pathological oscillations often consist of an aberrant version of a normal oscillation. For example, one of the best known types is the spike and wave oscillation, which is typical of generalized or absence epileptic seizures, and which resembles normal sleep spindle oscillations.

Tremor

A tremor is an involuntary, somewhat rhythmic, muscle contraction and relaxation involving to-and-fro movements of one or more body parts. It is the most common of all involuntary movements and can affect the hands, arms, eyes, face, head, vocal cords, trunk, and legs. Most tremors occur in the hands. In some people, tremor is a symptom of another neurological disorder. Many different forms of tremor have been identified, such as essential tremor or Parkinsonian tremor. It is argued that tremors are likely to be multifactorial in origin, with contributions from neural oscillations in the central nervous system, but also from peripheral mechanisms such as reflex loop resonances. [92]

Epilepsy

Epilepsy is a common chronic neurological disorder characterized by seizures. These seizures are transient signs and/or symptoms of abnormal, excessive or hypersynchronous neuronal activity in the brain. [93]

Thalamocortical dysrhythmia

In thalamocortical dysrhythmia (TCD), normal thalamocortical resonance is disrupted. The thalamic loss of input allows the frequency of the thalamo-cortical column to slow into the theta or delta band as identified by MEG and EEG by machine learning. [94] TCD can be treated with neurosurgical methods like thalamotomy.

Clinical endpoints

Neural oscillations are sensitive to several drugs that influence brain activity; accordingly, biomarkers based on neural oscillations are emerging as secondary endpoints in clinical trials and in quantifying effects in pre-clinical studies. These biomarkers are often named "EEG biomarkers" or "Neurophysiological Biomarkers" and are quantified using quantitative electroencephalography (qEEG). EEG biomarkers can be extracted from the EEG using the open-source Neurophysiological Biomarker Toolbox.

Brain–computer interface

Neural oscillation has been applied as a control signal in various brain–computer interfaces (BCIs). [95] For example, a non-invasive BCI can be created by placing electrodes on the scalp and then measuring the weak electric signals. Although individual neuron activities cannot be recorded through non-invasive BCI because the skull damps and blurs the electromagnetic signals, oscillatory activity can still be reliably detected. The BCI concept was introduced by Vidal in 1973 [96] as the challenge of using EEG signals to control objects outside the human body.

After the BCI challenge, in 1988, alpha rhythm was used in a brain-rhythm-based BCI for control of a physical object, a robot. [97] [98] This alpha-rhythm-based BCI was the first BCI for control of a robot. [99] [100] In particular, some forms of BCI allow users to control a device by measuring the amplitude of oscillatory activity in specific frequency bands, including mu and beta rhythms.



Spontaneous electrical low-frequency oscillations: a possible role in Hydra and all living systems

As one of the first model systems in biology, the basal metazoan Hydra has been revealing fundamental features of living systems since it was first discovered by Antonie van Leeuwenhoek in the early eighteenth century. While it has become well-established within cell and developmental biology, this tiny freshwater polyp is only now being re-introduced to modern neuroscience where it has already produced a curious finding: the presence of low-frequency spontaneous neural oscillations at the same frequency as those found in the default mode network in the human brain. Surprisingly, increasing evidence suggests such spontaneous electrical low-frequency oscillations (SELFOs) are found across the wide diversity of life on Earth, from bacteria to humans. This paper reviews the evidence for SELFOs in diverse phyla, beginning with the importance of their discovery in Hydra, and hypothesizes a potential role as electrical organism organizers, which supports a growing literature on the role of bioelectricity as a ‘template’ for developmental memory in organism regeneration.

This article is part of the theme issue ‘Basal cognition: conceptual tools and the view from the single cell’.

1. Introduction

Hydra, the small freshwater cnidarian polyp, has served as a fruitful model organism for numerous cell and developmental biological studies since its discovery over 300 years ago. Recently, Hydra has been revived as a model system in neuroscience, where its seemingly ‘simple’ nerve net is not only illuminating the activity of behaviour-generating neuronal circuits with dramatic whole-body in vivo imaging [1], but has also revealed an intriguing phenomenon [1]: spontaneous electrical low-frequency oscillations (SELFOs) at the same frequency as those found in the default mode network (DMN) in human brains. Here, the term ‘SELFO’ will be used to refer to organism-wide oscillatory electrical activity of low frequency (typically 0.01–0.1 Hz, but the exact frequency is organism-dependent) that is spontaneously produced independent of external stimuli and does not appear to directly generate behaviour. Such mysterious SELFOs were first observed in Hydra and other Cnidaria in the 1960s [2,3], but they were not pursued so their function was not deciphered. These SELFOs were rediscovered in the more recent Hydra work, but were similarly set aside in pursuit of behaviour-generating networks, so their function remains unknown [1]. In the 1990s, SELFOs were unexpectedly detected in the human brain with the discovery of the DMN, which has become widely studied and hypothesized to play a role in ‘resting-state’ mental processes, such as spontaneous thought, episodic memory, mind-wandering and self-related processing [4].

Surprisingly, increasing evidence suggests SELFOs are found throughout the living world—including in non-neuronal organisms such as plants [5,6], fungi [7–9], protozoa [10] and bacteria [11–13]—which may point to a fundamental biological function that evolved early in the development of life on Earth. The claim defended here is that SELFOs may have a potential role as electrical organism organizers, serving as system-wide integrators and communicators, making them critical for the construction and maintenance of organism unity and coherent, adaptive behaviour. Such a view is consistent with recent suggestions that bioelectrical phenomena may act as a template for developmental memory, including in regeneration, for which Hydra has long served as a model organism [14].

The paper is structured as follows. Section 2 presents an overview of Hydra as a model system in cell and developmental biology, focusing on what this early work taught us about how multicellular organisms build their bodies. Section 3 addresses early research into cnidarian neurophysiology and behaviour beginning in the 1870s and culminating in the 1960s, which revealed ubiquitous spontaneous neural activity [2,3]. Section 4 introduces the as-yet poorly understood spontaneous electrical activity in human brains, notably the ‘DMN’, its unexpected discovery, and hypotheses concerning its function [4]. Section 5 explores the evidence for SELFOs in other widely divergent organisms. Section 6 advances a highly preliminary hypothesis about what role SELFOs might be playing in biological systems—as organizers of organism construction and persistence—and how Hydra is an ideal model system to begin rigorously testing this idea and others that ramify from it.

2. Hydra as an early model system

The Dutch microbiological pioneer Antonie van Leeuwenhoek discovered Hydra in 1702. In his letter to the Royal Society of London, he described finding a number of ‘animalcula’ attached to the roots of ‘green weeds’ he had pulled out of a river in what was then called the Low Countries [15]. These particular ‘animalcula’ appeared to contract and elongate, produce ‘young animalcula’ from their sides, and draw small ‘wheels’ in and out of their bodies. Van Leeuwenhoek included drawings of these organisms along with his letter, but that was the extent of his investigation.

Nearly 40 years later, a Dutch tutor, Abraham Trembley, unaware of van Leeuwenhoek's earlier discovery, collected specimens from a nearby pond and rediscovered a small green polyp. Unlike van Leeuwenhoek, who surveyed many organisms, which happened to include Hydra, Trembley took a deep dive into Hydra biology and became fascinated with determining whether the organism was an animal or a plant. Plants were then known to regenerate, while animals were not. To settle the issue, Trembley cut the polyp in half. To his surprise, the polyp regenerated its entire body, which suggested to him it was a plant. However, the polyp could also move in complicated ways—including capturing prey, feeding itself using its tentacles and doing ‘somersaults’—abilities classically only associated with animals. Following a series of meticulous experiments in which he both observed and recorded the various behaviours of the polyps and the myriad ways they were able to regenerate themselves from fragments of tissue, Trembley concluded this category-defying polyp was, indeed, an animal that could regenerate itself, just like a plant. He published his landmark results in a series of letters to the Royal Society from 1742 to 1746 [16–19] and in a book cataloguing his studies in 1744 [20], which, together, sparked great interest in the phenomenon of animal regeneration and launched the field of Hydra biology.

Since Trembley's initial experiments nearly 280 years ago, Hydra has served as an extremely useful model organism for studying a wide variety of biological processes, including: ageing, regeneration, pattern formation, and stem cell maintenance and differentiation [21]. One of the major early discoveries in Hydra came in 1909 when Ethel Browne, a graduate student working alongside T. H. Morgan and E. B. Wilson, demonstrated that excising a piece of tissue from the sub-tentacle region of one animal and grafting it onto the body column of another induced the formation of a second body axis—a second head—at the implantation site [22]. As had Trembley, Browne carried out a series of careful grafting experiments that showed this novel property of ‘induction’, resulting in a second body axis, was reproducible and specific to tissue in the sub-tentacle region. Fifteen years later, Hans Spemann and Hilde Mangold performed nearly the same experiment and demonstrated the same effect using amphibian embryos [23]. They dubbed this special piece of ‘inducing’ tissue the ‘head organizer’, for which Spemann received the Nobel Prize in 1935. Mangold died before the prize was awarded and Browne's original work in Hydra was never acknowledged [24].

Nevertheless, Browne's discoveries in Hydra set the agenda for developmental biological research for years to come in which the primary aim was to determine what made the head organizer tissue so special [25]. What was it about that particular tissue that could induce the formation of another body axis and what were the specific ‘inducing factors’? Numerous tools were developed to identify and localize different molecules and cell types, which led to the finding that the head organizer establishes a gradient of molecules across developing organisms, a kind of ‘molecular map’ cells can ‘read’ to ‘know’ what kind of cell to become and their proper location within the organism [26]. Unexpectedly, these ‘molecular maps’ subsequently were found to be highly conserved among multicellular animals. The same molecules (e.g. Wnt, BMP, Hox) appeared to be used in essentially the same way by all organisms, from Hydra to humans, to establish body-axis polarity—the anterior–posterior and dorsal–ventral poles—as well as tissue types and overall body plan [27]. This demonstrated that studying fundamental biological phenomena in basal metazoans, like Hydra, can illuminate these same processes in more complex animals.

3. Neurophysiology and behaviour research in Cnidaria

In parallel to the primarily developmental studies in Hydra was a lesser-known line of research focused alternatively on behavioural and neurophysiological studies. This line of work began in the 1870s with George Romanes, one of Charles Darwin's disciples working in England, and Theodor Eimer, a zoologist working independently in Germany [28]. Both became fascinated by the complex behaviours of jellyfish, larger Cnidaria related to Hydra, including their ability to move in ‘purposeful’ ways and capture and ingest their prey (as does Hydra), which suggested the presence of a nervous system. Unlike their predecessors—including Louis Agassiz and Ernst Haeckel—who focused almost exclusively on identifying the structure of this presumed nervous system using various histological methods, with equivocal and controversial results, Romanes and Eimer aimed to prove these basal metazoans possessed a nervous system by studying its potential function in coordinating the animal's behaviour. They both made significant progress along these lines, which they published one month apart in 1874 [29,30], but both men died prematurely, ending this line of investigation.

Work on the neurophysiology and behaviour of Cnidaria was revived in the 1930s when Carl Pantin became interested in how the nervous system (then known to be in the form of a diffuse nerve net) controlled the muscles of the sea anemone, Actinia [28]. Pantin, like Romanes and Eimer, made considerable progress in determining how behaviour is coordinated in this system, which he summarized in his 1952 Croonian Lecture [31], but he, like his predecessors, was limited by the tools of his time. True neurophysiology did not begin in earnest until the 1950s, when the advent of both electron microscopy and electrophysiology enabled more sophisticated studies of the structure and function of cnidarian nervous systems. A major breakthrough in electron microscopy was identification of synapses with dense core vesicles in jellyfish neurons, confirming basal metazoans possess neurons with structures very similar to neurons in more complex organisms, such as mammals [32]. Concurrently, significant advances were being made into the electrical properties of these early nervous systems. A newly developed microelectrode inserted into the extracellular space adjacent to a jellyfish neuron enabled Horridge to record the first action potential in a cnidarian in 1953 [33]. This work inspired others to use microelectrodes to investigate the electrical properties of other cnidarians, including Hydra [28].

In the 1960s, Passano & McCullough published a series of papers [34–37] summarizing their experimental work on the electrical activity and behaviour of Hydra. Their careful analysis led them to three conclusions. First, Hydra exhibits spontaneous, rhythmic behaviour independent of the surrounding environment, although it can be influenced by external circumstances [35,37]. Second, Hydra possesses a nervous system composed of two ‘pacemaker systems’ that control specific behaviours [34–37]. Finally, these electrical ‘pacemaker systems’ also exhibit spontaneous, rhythmic activity, some of which was not associated with any behaviour, which they termed ‘cryptic’ [34–36]. These same features were found by other investigators of the time in several other cnidarians, suggesting significant conservation of function among these early animals [2,3]. These findings led Passano to propose a model featuring a ‘hierarchy of pacemakers' in which one pacemaker would serve to coordinate all the others to ensure coherent animal behaviour, giving the surprising appearance of certain ‘central nervous system’ features in these seemingly simple, radially symmetric nerve nets [2,3]. This work substantially contributed to understanding how electrical activity in Hydra is related to its behaviour. Nevertheless, given the continuing limitations of the available tools, major questions remained unanswered and the research ground to a halt—until now.

A major goal of modern neuroscientific research is to record the activity of all neurons in a behaving animal at single-neuron resolution to enable visualization of emergent phenomena of the whole system that would otherwise be missed when recording only one neuron at a time, as is the case when using microelectrodes [38]. The development of fluorescent genetically encoded calcium indicators (GECIs) in the early 2000s allowed all-optical imaging of previously inaccessible nervous systems [39]. With its small size, transparent body, and diffuse nerve net lacking any well-defined brain or ganglia (figure 1a), Hydra has proved to be an ideal model system for optical imaging of all of its neurons at single-cell resolution at the same time [40]. This feat was accomplished in 2017, when the activity of nearly all neurons in Hydra was imaged simultaneously using a transgenic animal expressing the GECI GCaMP6s in its neurons [1]. This work revealed two fundamental features of the Hydra nervous system that are mostly consistent with Passano & McCullough's earlier work. First, the Hydra nervous system is composed of three major neural networks (or ‘ensembles’). Second, it is spontaneously active (see figure 1b for details).

Figure 1. The Hydra nerve net and its proposed functions. (a) The Hydra nerve net is visualized by labelling neurons with GFP and imaging with a spinning disc confocal microscope. (b) The Hydra nerve net is composed of three major proposed behaviour-generating networks: the contraction burst network correlated with longitudinal contraction, the rhythmic potential 1 (RP1) network correlated with elongation, and the rhythmic potential 2 (RP2) network correlated with radial contraction. In addition to these proposed behaviour-generating networks, the RP1 network is also active during Hydra ‘rest’ when the animal is kept under constant external conditions and exhibits no observable behaviour [1,34].

The discovery of three major functional ensembles within the Hydra nervous system sheds light on a long-standing question in neuroscience: what is the ‘fundamental unit’ of the nervous system? The nervous system was originally hypothesized to be a continuous ‘reticular meshwork’ functioning as a single unit [41]; Ramón y Cajal and Sherrington transformed this understanding with the ‘neuron doctrine’—the idea, based on Schleiden & Schwann's ‘Cell Theory’ [42], that individual nerve cells are the fundamental units of nervous systems. The neuron doctrine remains neuroscientific orthodoxy, although not without escalating challenge. Today, growing evidence suggests function arises at the level of groups or ensembles of neurons [43,44], somewhere between a continuous meshwork and independent units. That Hydra, with one of the earliest nervous systems, a seemingly ‘simple’ nerve net, carved itself into three such functional ensembles supports this idea. Once again, this basal metazoan appears to be teaching us something fundamental about biology, in this case neurobiology.

The rediscovered second feature of the Hydra nervous system, its spontaneous activity, arguably has the potential to lead to similarly revolutionary changes in neurobiology. Sherrington's still-influential proposal of the nervous system as effectively a ‘reflex organ’ waiting for environmental stimuli to push the organism to behavioural response [45]—the foundational proposition of the input–output view of information processing [46]—cannot account for spontaneous neural activity that seemingly has no effect on behaviour. As we have seen, however, such ‘cryptic’, non-behaviour-inducing spontaneous activity has been recognized in Cnidaria for more than half a century [2,3]. Although speculated to play a role in coordinating animal behaviour at the time, the function of this activity was left to ‘future work’, which was never done. While poorly understood, findings across diverse Cnidaria were essentially the same: endogenously active nervous systems produced rhythmic, low-frequency pulses even in an unchanging environment and even when organisms were at rest. Why? Why would energetically expensive [47] nervous systems be perpetually active in the absence of a stimulus and in the absence of any discernible behaviour? A clue about the potential role of this low-frequency spontaneous neural activity in comparatively simple organisms comes from an unexpected place: the human brain.

4. The default mode network: low-frequency neural oscillations in humans

Even before the demonstration of spontaneous activity in cnidarian nervous systems, Hans Berger used his newly invented electroencephalograph to discover spontaneous, rhythmic electrical activity in human brains in 1929, which he termed ‘alpha waves’ [48]. Despite Berger's intriguing early findings, spontaneous brain activity was mostly ignored in favour of the prevailing view of the brain as an input–output machine, active only in response to external stimuli [45]. Almost seven decades passed before neuroscientists Shulman and Raichle independently noticed a paradoxical result while performing human neuroimaging studies designed to detect ‘task-evoked’ activity. A specific brain network appeared to be inhibited during tasks and more active while subjects were ‘at rest’ with their eyes closed [49,50]. A series of studies verified the presence of intrinsic brain activity in the absence of changing external conditions or goal-directed behaviour and forced Shulman and Raichle to conclude the conventional belief—that only external stimuli generate brain activity—was seriously flawed. This spontaneous, resting-state network became known as the brain's ‘default mode network’ (DMN) (figure 2a) and quickly became an area of intense investigation [4].

Figure 2. SELFOs in humans and Hydra. (a) Spontaneous electrical activity in the human ‘DMN’ in a representative subject ‘at rest’ as measured by functional magnetic resonance imaging (fMRI) (left) with its associated time course showing low-frequency oscillations (middle), which are proposed to play a role in the functions listed (right). Left and middle panels adapted from fig. 1a in [51] (copyright 2008 National Academy of Sciences, USA). (b) Spontaneous electrical activity in the Hydra RP1 network as visualized in Hydra ‘at rest’ expressing GCaMP6s in its neurons (left) with a representative time course measured in earlier work with extracellular electrodes showing its low-frequency oscillations (middle) of unknown function. A and A′ indicate asymmetrical epidermal muscle contractions correlated with an electrical potential distinct from rhythmic potential (RP). RP pulses are denoted by black dots and resulted in no observable behaviour. Middle panel adapted from Fig. 1 in [34].

While much work has been done on the human DMN since its discovery 20 years ago, its function remains debated. It is believed to be involved primarily in ‘resting-state’ mental processes, such as spontaneous thought, episodic memory, mind-wandering, and self-related processing [4]. Numerous studies have shown significant overlap between resting-state neural activity in cortical midline structures thought to compose the DMN and those active during self-related processing [52,53]. These findings have been replicated using a variety of self-specific versus non-self-specific stimuli in multiple domains, including facial, emotional, verbal, spatial, motor and memory, in which subjects routinely respond more robustly to self-specific versus non-self-specific stimuli [52,53]. In each domain studied, the same cortical midline structures active in the DMN at rest were also activated during self-specific stimulus processing during testing, leading Northoff to propose the DMN might contain, or encode, self-specific information [54]. In addition to these findings, mounting evidence shows disruption of DMN activity via psychedelics or meditation correlates with ‘ego dissolution’, or the loss of a sense of self, consistent with the idea that the DMN might play an important role in the formation of the self in humans [55–60]. However, precisely what the self is, and how the DMN might contribute to it, remain obscure.

A clue to how the DMN might contribute to the human self may come from what has been learned about spontaneous brain activity in general in recent years. We now know the human brain produces numerous spontaneous neural oscillations spanning a wide range of frequencies from ultraslow (0.01–1.0 Hz) to ultrafast (200–600 Hz) [61,62]. The same neural oscillation frequency distribution found in humans has been identified in all mammalian brains studied to date; such a robust frequency structure is one of the most highly conserved features of mammalian brains [61]. We also know intrinsic brain activity consumes up to 20% of total body energy, so it cannot be mere ‘noise’, as had been assumed for most of the twentieth century [63]. These two facts—a highly conserved structure and a high energetic cost—suggest that spontaneous brain activity is likely critical for brain function [61,63], although for what is unclear. One proposal, by Buzsáki, envisages a hierarchy of integrating oscillators that form the functional or ‘syntactical’ units of the elusive ‘neural code’, where faster, smaller and more local oscillations become entrained, integrated, or ‘read’ by slower, larger and more global oscillations [64]. The highest-frequency neural oscillations function as the ‘letters’ of the code, which are integrated or ‘read’ by lower-frequency oscillations that form ‘words’, which are integrated or ‘read’ into ‘sentences’ by the next-lowest frequency level, and so on. Although Buzsáki does not explicitly say so, the theory implicitly assumes the presence of an ultimate downstream integrator at the lowest frequency level, which ‘reads’ all the higher-frequency information. Interestingly, the DMN has been found to oscillate in such an ultraslow range (0.01–0.1 Hz) (figure 2a) [51,65–67], making it a potential candidate for an ultimate brain integrator.
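
To make the nesting idea concrete, the following minimal sketch (an illustration of the general principle only, written in Python/NumPy with arbitrary frequencies and amplitudes, and not taken from the cited work) shows how a slow oscillation can be recovered from the amplitude structure of a faster oscillation it modulates. This is the sense in which a lower-frequency oscillator can ‘read’ or integrate higher-frequency activity.

```python
# A minimal, illustrative sketch of cross-frequency nesting: a slow oscillation
# modulates the amplitude of a faster, alpha-like oscillation, so the slow
# rhythm can be "read" back from the fast signal's envelope. All frequencies
# and amplitudes here are arbitrary choices, not measured values.
import numpy as np

fs = 500.0                                # sampling rate (Hz)
t = np.arange(0.0, 80.0, 1.0 / fs)        # 80 s of simulated signal

slow = np.sin(2 * np.pi * 0.05 * t)       # ultraslow oscillation (0.05 Hz)
alpha = (1.0 + slow) * np.sin(2 * np.pi * 10.0 * t)  # 10 Hz, amplitude-nested

# Recover the slow "word" from the fast "letters": rectify the fast signal and
# smooth it with a 1 s moving average to estimate its envelope.
window = int(fs)
envelope = np.convolve(np.abs(alpha), np.ones(window) / window, mode="same")

r = np.corrcoef(envelope, slow)[0, 1]
print(f"correlation between 10 Hz envelope and 0.05 Hz oscillation: {r:.2f}")
# Expected to be close to 1: the slow oscillation is recoverable from the
# amplitude structure of the faster one in this toy picture.
```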

In addition to its frequency, the structure of the DMN may also provide clues to its function. The overall ‘small-world’ network architecture of the brain is composed of many short, local connections and few long-range connections between nodes. In this picture, the DMN appears to serve as one of the brain's main integrators, connecting major connection-rich hubs (the brain's ‘rich club’) via long-range, thickly myelinated axons [68–71]. This network architecture puts the DMN in a central position in the brain (figure 2a), in which it both receives and sends information rapidly among otherwise segregated local brain regions. It is believed the DMN receives exteroceptive input from all of the primary sensory areas as well as interoceptive input from the insula, thalamus, hypothalamus, midbrain and brainstem, and, in turn, can rapidly send information back to and between these same areas [4,52,72,73]. Thus, in addition to oscillating at the lowest frequency in the brain, the DMN seems to also be in a unique structural position to act as the ultimate downstream integrator, as implicitly predicted by Buzsáki's theory [64].
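
As a structural intuition for why a single well-connected integrator matters, the following toy graph sketch (a hypothetical ring network built with the networkx package, not a real connectome) shows how adding one hub sharply shortens the average path between otherwise locally connected ‘regions’, which is the property that would let a DMN-like node rapidly receive from and broadcast to the whole system.

```python
# Toy illustration (hypothetical network, not a connectome): adding a single
# hub node to a locally clustered ring lattice sharply reduces the average
# shortest path between regions. Requires the networkx package.
import networkx as nx

# A ring lattice: 100 "regions", each wired only to its 4 nearest neighbours
# (rewiring probability 0, so purely local connections).
local_only = nx.watts_strogatz_graph(n=100, k=4, p=0.0, seed=1)
print("local network, mean shortest path:",
      round(nx.average_shortest_path_length(local_only), 2))

# Add one hub ("integrator") connected to every 5th region by a long-range edge.
with_hub = local_only.copy()
hub = "integrator"
with_hub.add_node(hub)
with_hub.add_edges_from((hub, region) for region in range(0, 100, 5))
print("same network plus one hub, mean shortest path:",
      round(nx.average_shortest_path_length(with_hub), 2))
```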

Another way to think about the potential role of the DMN in human self-construction is as the top layer of the hierarchical predictive coding ‘self-model’ as put forth by Friston [74,75]. Like Buzsáki's theory, which predicts the need for an ultimate brain integrator or ‘reader’ (i.e. a ‘self’), a hierarchical predictive coding model also implies the need for an ultimate brain integrator or ‘predictor’ (also a ‘self’) at the top of the hierarchy. According to predictive coding brain models, prediction error is passed up the hierarchy from the low-level primary, unimodal sensory areas to the ultimate, multi-modal ‘predictor’ at the top of the hierarchy, which contains a high-level abstract representation (of the ‘self’) that then passes predictions back down to the lower levels [74,75]. In this way, the DMN, oscillating at the lowest frequency in the brain, might act as the brain's ultimate information integrator, receiving input from all the lower-level, otherwise isolated units (oscillating at higher frequencies), and passing one unified ‘self’ prediction back down to generate coherent, adaptive behaviour (figure 4c).
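
The basic loop being described can be sketched in a deliberately minimal form (my own illustration, not Friston's formal scheme; all variables and parameters below are hypothetical): a single top-level estimate predicts several lower-level sensory channels, receives their prediction errors, updates itself to reduce them, and sends the refined prediction back down.

```python
# A minimal predictive-coding-style sketch: one top-level "self/world" estimate
# predicts two lower-level channels, receives their prediction errors, and is
# updated to reduce them. Values and weights are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

hidden_cause = 2.0                       # the true state of "self + world"
weights = np.array([1.0, 0.5])           # how each sensory channel reflects it
top_estimate = 0.0                       # the integrator's current belief
learning_rate = 0.1

for step in range(200):
    # Lower levels report noisy observations of the hidden cause.
    sensory = weights * hidden_cause + rng.normal(0.0, 0.1, size=2)
    # Top-down predictions for each channel, from the single unified estimate.
    predicted = weights * top_estimate
    # Bottom-up prediction errors.
    errors = sensory - predicted
    # The top level updates its one estimate using all channels' errors.
    top_estimate += learning_rate * weights @ errors

print(f"true hidden cause: {hidden_cause:.2f}, "
      f"top-level estimate after updating: {top_estimate:.2f}")
```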

Together, these findings suggest the DMN may be implementing a top-down control mechanism in the human brain as it receives bottom-up information from all brain areas (which oscillate at higher frequencies) and may, in turn, constrain these lower levels via its slow-wave oscillations, while also rapidly communicating its unified output to all brain regions via its synchronous electrical activity to maintain organism unity (i.e. a coherent ‘self’). Hence, the human ‘self’ may be constructed bottom-up with the DMN emerging as the ultimate neural integrator and top-down ‘enslaver’ of all the lower levels of organization in the brain. Importantly, this view of the human self does not imply the DMN is the self or that the self is a thing located in the DMN. Rather, it suggests the self is an ongoing process in which the DMN continuously receives internal and external sensory information and adaptively updates its predictive model of itself and the world. Although evidence is accumulating connecting the DMN to the human self, its precise function and mechanism remain unclear and the speculative hypotheses put forth here remain untested owing to the difficulties of both imaging and manipulating human brain activity.

5. SELFOs in the living world

The presence of SELFOs in cnidarians and humans (figure 2) raises a question: are they found elsewhere? As already mentioned, the same cortical oscillation frequency distribution found in human brains has been found in the cortex of all mammalian brains studied thus far, including in chimpanzees, macaques, sheep, baboons, pigs, dogs, cats, rabbits, guinea pigs, rats, hamsters, gerbils, mice and bats [61]. In addition to the low-frequency cortical DMN, there is evidence for subcortical DMN nodes in the midbrain and brainstem that are highly conserved among mammals and co-active with cortical DMN nodes, thus forming a cortical–subcortical DMN [72,73,76]. Using new functional magnetic resonance imaging (fMRI) techniques, SELFOs have recently been observed in human, non-human primate, and rat spinal cords, indicating these oscillations pervade the entire mammalian central nervous system [77–80].

Spontaneous neural activity is not specific to mammals, however. Zebrafish brains generate a wide range of spontaneous oscillation frequencies, including the ultraslow-frequency range (0.01–0.1 Hz) [81], although their function remains mostly unknown. Brain-wide oscillations of a variety of frequencies have also been recorded in a wide range of insects, including moths [82], locusts [83], water beetles [84], honeybees [85] and flies [86]. Although most of this work has been focused on stimulus-evoked activity and higher-frequency oscillations, an ultraslow-frequency (0.01–0.1 Hz) spontaneous network has been identified in flies [87], the function of which remains to be determined. At the base of the metazoan lineage sit Cnidaria, which possess the earliest known nervous systems: radially symmetric nerve nets that appear to universally generate SELFOs of unknown function (figure 2b) [2,3]. Thus, the evidence points to the presence of SELFOs not only in all mammals, but in all animals with a nervous system, despite substantial differences in size and structure.

What about organisms without neuronal wiring? Do they produce similar electrical activity? The answer is resoundingly affirmative (figure 3). Plants have been known to produce neuron-like action potentials for years [89]. However, recent work using new tools (GCaMP3s) in Arabidopsis made this even clearer when calcium-mediated action potentials were observed in response to wounding, which travelled throughout the plant and induced expression of downstream wound-response genes at distant sites [90]. In addition to stimulus-evoked electrical activity, plants also exhibit ongoing SELFOs in the transition zones of their roots (figure 3a), the proposed information ‘integration centre’ for the whole plant [5,6,88]. Accumulating evidence suggests the plant root transition zone may serve as a sensory information integrator and coordinator of motor responses in distant stems and leaves in response to changing conditions (e.g. light, temperature, salt stress or wounding) [6,88]. What role SELFOs might play in this process remains to be determined. Similarly, several multicellular fungi have been found to exhibit spontaneous, low-frequency action potential-like spikes [7–9]. The first low-frequency spontaneous ‘action potentials’ were identified in the mature hyphae of the fungus Neurospora crassa in 1976 using intracellular electrodes—potentials that were conducted organism-wide and had no clear function [8]. Twenty years later, SELFOs were demonstrated in the hyphae of Pleurotus ostreatus and Armillaria bulbosa, the frequency of which increased in the presence of various stimuli (e.g. sulfuric acid, water, malt extract and wood) and decreased when the wood stimulus was removed, leading the authors to speculate such SELFOs may be used for organism-wide communication in response to changing external conditions [9]. More recently, a 2018 study using extracellular electrodes placed in the cap and stalk of the oyster mushroom (Pleurotus djamor) fruit body also revealed SELFOs with no obvious function (figure 3b), although a potential role in organism-wide communication was again proposed [7].

Figure 3. SELFOs in organisms without nervous systems. (a) Electrophysiology of plants (Zea mays) was investigated using a multi-electrode array in plant roots (left). An example electrical recording shows SELFOs (middle), the function of which is unknown, but a role in information integration and communication has been proposed [6,88]. Figure adapted from Figs 1d and 2 in [5]. (b) Electrophysiology of fungi (Pleurotus djamor) was investigated using extracellular electrodes placed in the cap and stalk of fruit bodies (left). An example electrical recording shows SELFOs (middle) of unknown function. Figure adapted from Figs 1b and 3b in [7] (http://creativecommons.org/licenses/by/4.0/). (c) Electrophysiology of single amoebae (Chaos chaos) was investigated using both intra- (V1) and extra- (Ex) cellular electrodes while the amoeba was held stationary in a glass chamber (left). An example electrical recording shows SELFOs (middle) of unknown function. Figure adapted with permission from Fig. 4a in [10]. (d) Electrophysiology of single bacteria (Escherichia coli) was investigated using the fluorescent genetically encoded voltage indicator PROPS (proteorhodopsin optical proton sensor) (left). Fluorescence intensity of individual bacteria over time shows spontaneous low-frequency oscillations in membrane potential (middle) of unknown function. Figure adapted with permission from Movie S1 and Fig. 2a in [11].

But it does not end there. Spontaneous low-frequency electrical activity has also been observed in unicellular eukaryotic and prokaryotic organisms. In 1964 researchers conducted electrophysiological experiments using both intra- and extracellular electrodes in two freshwater amoebae (Chaos chaos and Amoeba proteus) in an effort to determine why their cytoplasmic potassium concentrations were so high. Surprisingly, they discovered spontaneous action potential-like ‘spike potentials’ of low frequency (figure 3c), which prompted them to study these unexpected phenomena instead. They found the spontaneous ‘spike potentials’ could be modulated by various chemicals (e.g. ethyl ether, cocaine, potassium oxalate, CaCl2), but had no discernible effect on the cell's behaviour or morphology, leaving their function obscure [10]. Despite possessing many ion channels [91], the electrophysiology of bacteria was mostly unknown until recently owing to the difficulty of using traditional microelectrodes in very small cells with cell walls. The creation of a fluorescent voltage-sensitive protein in 2011, however, allowed visualization of the dynamic electrical properties of bacteria for the first time, revealing spontaneous, low-frequency action potential-like spikes in Escherichia coli not clearly related to behaviour (figure 3d) [11]. More recently, Bacillus subtilis in biofilms have been shown to engage in long-range electrical signalling via propagation of synchronized low-frequency potassium waves both within and between biofilms to coordinate nutrient sharing [12,13], further suggesting a potential role for low-frequency electrical oscillations in ‘organism’-wide information integration and communication.

Although very little is known about their function, SELFOs of some sort appear to be present in most organisms studied thus far, suggesting an important role in living systems.

6. Hypothesis: SELFOs as electrical organism organizers

So far we have reviewed the early discovery of the molecular head organizer in Hydra, seen that SELFOs of unknown function exist in the earliest nervous systems, learned how the SELFO in the human brain, the DMN, may contribute to the human self by acting as a brain-wide integrator and communicator, and discovered the widespread presence of SELFOs in other highly divergent phyla. Here, I will attempt to weave these threads together and briefly conjecture that SELFOs may be the ultimate organism-wide electrical information integrators and communicators in all biological systems at all levels of scale, making them critical for maintenance of organism unity and coherent, adaptive behaviour.

Since the discovery of the molecular head organizer in Hydra over 100 years ago, much has been learned about how organisms build their bodies [26,27]. That is, we have learned much about the spatial domain of biology—how multiple independent units (e.g. proteins in cells and cells in multicellular organisms) are coordinated in space to form a unified, structural whole. However, much less is known about the temporal domain of biology. Once a structural whole, a body, is built, how is it maintained and how is its activity coordinated in time? How does such a body constructed of many parts move and behave as one, coherent unit? Can the presence of a SELFO in nearly all living systems help answer these questions?

(a) Emergence of SELFOs in biological systems

To begin, we must consider what physics teaches us about the collective behaviour of non-living systems in which many individual subunits at a lower level of scale (e.g. individual H2O molecules) can give rise to various emergent properties at a higher level of scale (e.g. at the population level of many H2O molecules). There are three basic emergent phenomena non-living systems exhibit: total order (a solid in the case of water), total disorder (gas) or something in between (liquid) [92]. Unlike non-living systems, which tend toward equilibrium and can be found in any of these collective states, biological systems are generally considered to be self-organizing complex dynamical systems that tend to maintain themselves in the ‘somewhere in between’ category near the ‘edge of chaos’ where the system exhibits the most flexibility—not too ordered or rigid and not too disordered or chaotic [93,94]. Two main advantages of living on the ‘edge of chaos’ have been proposed: greater information flow through the system, and greater within-system flexibility of pattern formation and dissipation [95].
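
A standard toy from nonlinear dynamics can make these three regimes concrete, though only as an analogy and not as a model of any organism: in the logistic map, a single parameter moves the system from total order through structured intermediate behaviour to chaos, with biological systems argued to operate nearest the flexible intermediate regime. The short Python sketch below (parameter values are textbook choices made purely for illustration) counts how many distinct states the system visits in each regime.

```python
# Illustrative only: the logistic map x -> r*x*(1-x) as an analogy for the
# ordered / in-between / chaotic regimes discussed above.
import numpy as np

def logistic_orbit(r, x0=0.2, n_transient=500, n_keep=200):
    """Iterate the logistic map, discard transients, return the settled orbit."""
    x = x0
    for _ in range(n_transient):
        x = r * x * (1.0 - x)
    orbit = []
    for _ in range(n_keep):
        x = r * x * (1.0 - x)
        orbit.append(x)
    return np.array(orbit)

regimes = [
    (2.8, "total order (settles to a single fixed point)"),
    (3.5, "in between (a structured, repeating 4-cycle)"),
    (3.9, "chaos (no repeating pattern)"),
]
for r, label in regimes:
    orbit = logistic_orbit(r)
    n_distinct = len(np.unique(np.round(orbit, 6)))
    print(f"r = {r}: distinct states visited = {n_distinct:3d}  ({label})")
```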

Given a vast potential state space, how biological systems maintain themselves within a critically narrow band of operation remains one of the major unanswered questions in biology. However, top-down feedback from higher levels of scale (e.g. organism) to subunits at lower levels (e.g. organs, cells in multicellular aggregates, proteins) is believed to play an important role [96–98]. Is it possible that SELFOs provide biological systems with electrical top-down feedback to maintain them in this dynamic, habitable state space? If so, how might they emerge from the lower-level subunits? Although the oscillations themselves have a similar character (figures 2 and 3), it is quite likely that they are generated by different mechanisms in different kinds of biological systems. We will now look at some possibilities in single-celled organisms, non-neural organisms and organisms with nervous systems.

As already noted, despite conventional thinking that neuronal cells are unique in their ability to conduct electrical signals, many non-neural cells, from bacteria to various human cells, exhibit electrical activity in the form of subthreshold membrane potential oscillations and neuron-like action potentials (figure 3) [99–101]. These activities are generally thought to arise from the passage of ions through membrane ion channels. However, recent work suggests proteins, rather than being electrical insulators (as long thought), may conduct significant current depending on their conformation [102]. Using a scanning tunnelling microscope, researchers demonstrated that six randomly selected proteins previously assumed to be electrically inert all efficiently conducted current when bound to their cognate ligands in their native aqueous environment [102]. These findings challenge the view of proteins as primarily engaged in building cellular structures, catalysing chemical reactions, and transducing inter- and intracellular signals via post-translational modification [103]. Instead, these results suggest that protein modifications may not be the signal itself; rather, by changing a protein's conformation, they may gate the ‘real’ electrical signal, allowing or prohibiting current flow through the protein. Thus, proteins may act as subcellular electrical ‘hardware’ (figure 4a) serving as both ‘wires’ and ‘transistors’ that are ‘opened’ and ‘closed’ based on various protein modifications, which affect their ability to conduct current.

Figure 4. Electrical signalling throughout life. (a) Electrical signalling in single cells in which the ‘hardware’ may consist of (i) proteins that pass current (blue arrow) depending on their configuration, which can be altered with protein modifications (such as phosphorylation as shown). These ‘hardware’ pieces can be arranged in different networks within cells (ii) to form electrical circuits encoding information about the internal (top right circuit) and external (bottom right circuit) states, which can feed up to the spontaneous electrical low-frequency oscillator (SELFO) at the top level to be integrated to produce abstract representations (i.e. ‘software’) (iii) of both the cell's internal state (i.e. ‘self’ model) and external environment (i.e. ‘world’ model). The top-level SELFO can then feed back down to coordinate and update the lower-level components. (b) The same general architecture applies at the next level of scale in non-neural tissue where the ‘hardware’ becomes (i) non-neural cells that can be connected via gap junctions to allow the passage of ions intercellularly (blue arrows) while ion channels function to conduct ions intra- or extracellularly. These ‘hardware’ pieces can be arranged in different configurations within non-neural tissue to form electrical circuits encoding information about the internal (top right circuit) and external (bottom right circuit) states, which can feed up to the SELFO at the top level to be integrated to produce abstract representations (i.e. ‘software’) (iii) of both the organism's internal state (i.e. ‘self’ model) and external environment (i.e. ‘world’ model). The top-level SELFO can then feed back down to coordinate and update the lower-level components. (c) In organisms with nervous systems, the ‘hardware’ is upgraded to neurons (i) that, in addition to chemical synapses, can be connected via gap junctions to allow the passage of ions intercellularly (blue arrows) while ion channels function to conduct ions intra- or extracellularly. These ‘hardware’ pieces can be arranged in different configurations within neural tissue to form faster and more complex electrical circuits encoding information about the internal (top right circuit) and external (bottom right circuit) states, which can similarly feed up to the SELFO at the top level to be integrated to produce abstract representations (i.e. ‘software’) (iii) of both the organism's internal state (i.e. ‘self’ model) and external environment (i.e. ‘world’ model). As in the other cases, the top-level SELFO can feed back down to coordinate and update the lower-level components.

This new work supports an old idea originally proposed by Albert Szent-Györgyi in 1941 [104]: that proteins with highly regular structure might act as electron semiconductors within cells, similar to ‘non-living’ materials like crystals. This theory never took hold despite significant supporting evidence, including from Szent-Györgyi himself, who in 1980 demonstrated electronic conduction in a variety of dry proteins (e.g. casein, BSA, collagen, lysozyme)—conductivity that was similarly altered by protein conformational changes due to both chemical and electrical modifications [105]. In parallel, Michael Berry put forth an ‘electrochemical model of metabolism’ in 1981 where he argued that cellular metabolic pathways (e.g. glycolysis, gluconeogenesis) can only be explained in terms of both chemical and electrical flows in which the flow of electrons and protons through proteins is critical for driving chemical reactions, not just within membranes, but, likely, throughout the entire cell [106–108]. Berry likened biological cells to ‘micro-electrode arrays’ composed of two material phases: the ‘solid-state phase’ (i.e. the highly ordered ‘microtrabecular lattice’ made up of cytoplasmic proteins and organelles that can pass current), and the surrounding ‘bulk aqueous phase’ (i.e. the ‘electrolyte’, which can supply current). On this view, the ‘microtrabecular lattice’ is seen as a ‘protoneural network’ in which electric current is passed within and between protein networks and organelles, which drives chemical reactions (see [106–108] for a full discussion of this complex topic). While this model remains to be fully verified (see [109] for a recent review calling for more research in this direction), the recent demonstration of electronic conduction within proteins in their native aqueous environments lends it further support [102].

Altogether, this work suggests the electrical properties of cells may be highly complex and dynamic with proteins binding together to form electrical circuits that are embedded in a changing electrical environment driven by ion flows within the ‘aqueous phase’ of the cytoplasm [106–108]. Given the alterations in protein conductivity observed with different chemical and electrical modifications [102,105,108], conduction through protein ‘wires’ would be expected to be highly dynamic and responsive to the surrounding chemical and electrical milieu. Such an intricate electrical landscape may be sufficient to generate a complex electrical oscillation frequency structure within cells, similar to those found in mammalian brains (figure 4). If this proposal is remotely correct, a SELFO might emerge bottom-up on the intracellular level as a consequence of complex interactions of many proteins passing electric currents and ions moving within the cytoplasm. Such a SELFO could, in turn, feed back down to coordinate and constrain those same lower-level subunits.

In multicellular organisms without nervous systems, mounting evidence suggests that electrical communication between somatic cells (the electrical ‘hardware’ at this level of scale, figure 4b) occurs both directly, via ion flow through cell–cell gap junctions, and indirectly, via extracellular ion flow through ion channels [14,101]. As we have seen, intercellular electrical communication exists in bacterial biofilms, when potassium ions are pumped out of single bacterial cells, causing neighbouring cells to release extracellular potassium through their membrane potassium channels, thus propagating a long-range potassium wave that travels from the inside of the biofilm to the periphery [12,13]. The spontaneous emergence of such biofilm-wide low-frequency electrical oscillations has now been mathematically modelled and shown to arise from an intricate interplay of bacterial metabolic stress that is communicated long range via electrical signalling to coordinate the individual bacterial cell responses within the group [110]. Thus, from the start, it appears mechanisms were in place to allow low-frequency electrical oscillations to spontaneously arise from complex interactions between groups of non-neural cells, which then serve as a ‘top-down’ mechanism within the system to coordinate and constrain the lower-level components.
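
The flavour of this mechanism can be conveyed with a toy sketch, far simpler than the published model [110] and with entirely hypothetical parameters: in the Python chain below, a single ‘interior’ cell slowly accumulates metabolic stress, fires when it crosses a threshold, and passes a potassium-like ‘kick’ outwards, so a low-frequency wave repeatedly sweeps the whole chain from purely local interactions.

```python
# A toy, discrete-time sketch of an interior-driven wave in a 1-D chain of
# cells standing in for a biofilm cross-section. All parameters are arbitrary.
import numpy as np

n_cells = 30
threshold = 1.0
refractory_steps = 20
stress_rate = 0.002     # only the interior cell (index 0) accumulates stress
kick = 1.2              # "potassium" passed from a firing cell to its neighbour

state = np.zeros(n_cells)                 # stress/depolarization of each cell
refractory = np.zeros(n_cells, dtype=int)
fire_times = {c: [] for c in range(n_cells)}

for t in range(5000):
    state[0] += stress_rate               # slow metabolic stress buildup
    firing = (state >= threshold) & (refractory == 0)
    for c in np.where(firing)[0]:
        fire_times[c].append(t)
        refractory[c] = refractory_steps
        state[c] = 0.0
        if c + 1 < n_cells and refractory[c + 1] == 0:
            state[c + 1] += kick          # pass the signal outwards
    refractory = np.maximum(refractory - 1, 0)
    state *= 0.999                        # slow leak back toward rest

print("interior cell fired at steps:", fire_times[0][:4])
print("outermost cell fired at steps:", fire_times[n_cells - 1][:4])
# The periphery fires a fixed delay after the interior: a repeating,
# low-frequency wave generated by interior stress propagates "organism"-wide
# through purely local interactions.
```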

In addition to the long-range extracellular electrical potassium waves found in bacterial biofilms, a growing body of evidence shows both extracellular and intercellular electrical communication occurs between cells in a wide variety of non-neuronal organisms [14,101]. It is thought each cell type possesses a unique resting membrane potential, which, in most cases, may oscillate over time [101]. Thus, when these non-neuronal cells are coupled via cell–cell gap junctions, their subthreshold membrane-potential oscillations can be transmitted from somatic cell to somatic cell, creating distinct electrical circuits throughout the organism depending on how the cells are connected (figure 4b) [111]. These body-wide subthreshold membrane potential circuits have been shown to play critical roles in both the development and maintenance of overall body structure and have been proposed as another potential ‘top-down’ mechanism organisms use to coordinate their many parts [14,112]. After tissue injury, for example, it appears the modulation of organism-wide electrical signals precedes changes in molecular signals, suggesting the faster electrical signals are likely coordinating and constraining the slower molecular components [113]. This work suggests organisms build and maintain their bodies via a continuous complex feedback loop between subcellular molecular components (e.g. ion channels and gap junctions) that affect electrical activity on a higher level of scale (e.g. the circuit level), which then feeds back down to affect both the transcription and behaviour of the molecular components [14]. Thus, in addition to the classical molecular gradients that have been well established in developmental biology since the discovery of the ‘head organizer’ [27], there appears to be an electrical activity gradient that likely arises out of the lower-level molecular components and might serve to coordinate and constrain those same molecular components. Given the complex interactions of the many underlying subunits at both the subcellular and cellular scale in non-neural organisms, it may be that a SELFO generating neuron-like action potentials emerges out of these interactions to communicate information organism-wide in a faster manner, as has been observed in both plants [5,6] and fungi [7–9] (figure 3).

Finally, how might SELFOs be generated in organisms with nervous systems? Neurons have long been regarded as the most efficient electrical ‘hardware’ in biology, conducting current rapidly through their long, one-dimensional ‘tubes’ (i.e. axons) and connecting to form circuits via both gap junctions and chemical synapses (figure 4c). Using these parts, it may be, as Passano originally proposed for the cnidarian nervous system half a century ago, that the highly conserved oscillation frequency structure observed in all mammalian brains arises as a ‘hierarchy of pacemakers’ [2,3]. Most neurons exhibit intrinsic pacemaker activity [114,115]. That is, when isolated in culture, neurons from many different nervous systems exhibit ongoing, spontaneous electrical oscillations of varying frequencies. As in the classic Huygens' clock experiment [116], if two oscillators of similar frequency are coupled, they will tend to synchronize and oscillate together. If many intrinsically oscillating single neurons are connected, it is plausible to conjecture they might spontaneously form groups of oscillators (i.e. ensembles), oscillating at the same frequency. In this way, nervous systems of all shapes and sizes may spontaneously self-assemble into higher-level structures (i.e. ensembles of various sizes oscillating at various frequencies) forming a ‘hierarchy of pacemakers’ in which the biggest, slowest oscillator in the system might serve to coordinate and constrain all of the smaller, faster oscillators (figure 4c).
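
The synchronization intuition is commonly formalized with Kuramoto-type models of coupled phase oscillators. The sketch below (illustrative parameters only, not a model of any particular nervous system) shows that oscillators with different intrinsic frequencies remain incoherent when uncoupled but phase-lock into a shared rhythm once weakly coupled, as in Huygens' clocks.

```python
# A minimal Kuramoto-style sketch: intrinsic "pacemaker" neurons with slightly
# different frequencies synchronize once coupled. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(2)
n = 50                                        # number of intrinsic pacemakers
omega = rng.normal(1.0, 0.1, n)               # intrinsic frequencies (rad/s)
dt, steps = 0.01, 20000

def coherence(theta):
    """Kuramoto order parameter: 0 = incoherent, 1 = fully synchronized."""
    return np.abs(np.mean(np.exp(1j * theta)))

for coupling in (0.0, 0.5):
    theta = rng.uniform(0, 2 * np.pi, n)      # random initial phases
    for _ in range(steps):
        # each oscillator is pulled toward the phases of the others
        mean_field = np.mean(np.sin(theta[None, :] - theta[:, None]), axis=1)
        theta += dt * (omega + coupling * mean_field)
    print(f"coupling K = {coupling}: final coherence = {coherence(theta):.2f}")
```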

(b) Function of SELFOs in biological systems

Having considered how SELFOs may emerge bottom-up via a variety of mechanisms within biological organisms, we will now explore what they might do in more detail. Here, I propose three potential functions of SELFOs within living systems: (i) maintaining them at or near their critical point, (ii) integrating all the lower-level electrical information in the system, and (iii) continually communicating that high-level ‘view’ back down to the constituent components to both coordinate and update them on the overall state of the system to generate coherent, adaptive behaviour. While a thorough analysis of these potential functions is beyond the scope of this article, each will be briefly examined below.

(i) SELFOs maintain biological systems near criticality

As mentioned, one job of SELFOs may be to maintain organisms at or near their ‘critical point’ in state space by constraining them ‘top-down’ via their slow-wave electrical oscillations to allow both optimal information flow through the system and optimal flexibility of pattern formation and dissipation [95]. How might these properties be advantageous for living systems? Take the example of single cells, which contain many subcellular components (e.g. proteins). As with a glass of water, there are three general configurations a cell might be in with respect to its constituent parts, as discussed above: total order (proteins stuck in an unchanging state), total disorder/chaos (proteins moving about at random), and ‘somewhere in between’ (proteins form ‘patterns’—bind to each other to form useful structures—for a certain period of time before those patterns dissipate) [94]. To be most adaptive to its environment, a cell would do best by maintaining itself in the ‘somewhere in between’ state, where patterns formed by its proteins are maintained just long enough to be useful, but not so long that the cell ends up in a fixed, non-adaptive state (with all proteins stuck in one configuration, i.e. cell death) [93].

This advantage also applies to non-neural and neural organisms and is best understood in terms of the human brain, which is thought to maintain itself near criticality [117]. Interestingly, evidence suggests it is the SELFO in the human brain, the DMN, that might maintain the system near its critical point, as disruption of the DMN results in more ‘fluid’ brain states in which neural activity patterns are more disordered and chaotic, correlating with psychedelic and psychotic states [57,118,119]. Conversely, overactivity of the DMN results in more ‘stuck’ brain states in which neural activity patterns are more ordered, correlating with rumination and anxious or depressed states [57,117,120–122]. A totally ordered brain would be one in which either all neurons are off (i.e. brain death) or all neurons are firing in synchrony (i.e. seizure)—neither of which is a very useful state for the organism. Thus, SELFOs might serve to maintain biological systems near their critical point to allow both optimal information flow and pattern formation that is neither too ordered nor too disordered.

(ii) SELFOs as organism-wide information integrators

The second role SELFOs may play in living systems is as organism-wide electrical information integrators. As discussed, all biological systems are composed of many constantly changing parts that must continually cooperate to form a unified whole that can both maintain its structure (i.e. its body) and move it to generate coherent, adaptive behaviour. This implies some part of the system must have access, however indirectly, to all the information within the system. No single subunit (e.g. a single molecule in a cell or a single cell in an organism) can have access to all the information in the system—that is the wrong level of scale [123]. However, a SELFO generated by those lower-level components, thus operating at a higher level of scale, could, in principle, receive information about all the lower-level subunits by integrating all the bottom-up electrical information in the system, as reviewed in Section 4 above, and outlined in figure 4. As such, SELFOs may act as the ultimate integrators in biological systems, integrating the lower-level electrical information being sent ‘up’ as increasingly abstract representations of both the internal state of the system and the external environment the system is encountering. In this way, the SELFO would ultimately receive and ‘view’ all of the highest-level abstract representations of both the organism (i.e. its ‘self’) and its environment (i.e. its ‘world’), thus forming one integrated ‘self’/‘world’ model (figure 4).

This view suggests the SELFO may thus continuously receive bottom-up electrical information from the entire system which it then might integrate over a specific time window based on its frequency, before taking a ‘snapshot’ of the organism and its environment—much like a camera chip integrates photons over a specified exposure time before taking a picture. Interestingly, biological SELFOs do not appear to maintain consistent oscillation frequencies. Rather, they appear to change their frequencies in response to different stimuli [2,6,7], as discussed above in the case of fungi that increase the frequency of their SELFOs in response to sulfuric acid, water, malt extract and wood, and decrease their frequency when the wood stimulus is removed [9]. Thus, it may be that SELFOs in each biological system have a specific baseline frequency range, determined by each organism's unique makeup, in which organisms might maintain a certain mid-range ‘baseline’ SELFO frequency that can be altered in response to both internal and external input. For example, if the organism is at rest and everything is as expected both internally and externally, the SELFO may integrate over a longer period of time (i.e. wait longer between snapshots of its ‘self’ and its environment) and thus update its ‘self’/’world’ model less frequently as nothing much is changing. However, if the organism encounters something unexpected (e.g. a predator is nearby) the SELFO may integrate over a shorter period of time (i.e. take more frequent snapshots of its ‘self’ and its environment) to increase its temporal resolution and update its ‘self’/’world’ model more frequently as it experiences faster change.
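
The ‘exposure time’ analogy can be made concrete with a toy sketch (my own illustration, not a published model; all parameters are hypothetical): an integrator averages a noisy input over a window, compares each ‘snapshot’ with its current model, and shortens its next integration window when it is surprised, lengthening it when it is not.

```python
# A toy adjustable-integration-window sketch: the integrator takes "snapshots"
# of a noisy input, and adapts how long it integrates based on surprise.
import numpy as np

rng = np.random.default_rng(3)

def world(t):
    """Noisy input that changes abruptly at t = 500 (an unexpected event)."""
    baseline = 0.0 if t < 500 else 5.0
    return baseline + rng.normal(0.0, 0.5)

model = 0.0                # the organism's current "self/world" estimate
window = 50                # integration time (time steps between snapshots)
t = 0
while t < 1000:
    snapshot = np.mean([world(t + k) for k in range(window)])  # integrate
    t += window
    surprise = abs(snapshot - model)
    model = snapshot                                           # update model
    # shorter window when surprised, longer when the world matches the model
    window = int(np.clip(window * (0.5 if surprise > 1.0 else 1.2), 10, 100))
    print(f"t={t:4d}  snapshot={snapshot:+.2f}  surprise={surprise:.2f}  "
          f"next window={window}")
```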

Another potential consequence of adjusting the SELFO frequency based on internal and external input may be a simultaneous modulation of the system's position in state space such that lowering the SELFO frequency (i.e. increasing its integration time) may result in a more ‘fluid’ system when it is ‘resting’ in an expected state and increasing the SELFO frequency (i.e. decreasing its integration time) may result in a more ‘constricted’ or ‘rigid’ system when it is in an unexpected or ‘stressed’ state.

(iii) SELFOs as organism-wide synchronizers and communicators

In addition to receiving all the electrical information in the organism, a third function SELFOs might serve is to transmit such integrated, high-level information back to their lower-level parts via organism-wide, synchronous firing to coordinate and constrain them. Ideally, the same signal would reach each component simultaneously, such that the SELFO could serve as a ‘master clock’ for the organism, coordinating all parts in time. If biological systems are poised at or near criticality, information flow through the system would be optimal, ensuring the SELFO can both receive and send system-wide information most rapidly [95]. Unlike typical machine clocks, however, which are precisely designed to maintain regular oscillations and thereby keep consistent time [124], the SELFO ‘master clock’ found in biological organisms appears to be constantly altering its frequency based on internal and external input [2,6,7,9,10], as previously discussed. Hence, in addition to serving as a timing device, SELFOs/biological ‘master clocks’ may also transmit information about the state of the system by changing their frequencies (i.e. changing their clocking intervals). The SELFOs/biological ‘master clocks’ thus appear to be intrinsically adjustable oscillators (i.e. adjustable clocks), adjusted internally by their own continually changing components, in contrast to adjustable oscillators in machines, like radios, which must be adjusted externally [124].

Such an intrinsically adaptable SELFO/biological ‘master clock’ would allow the organism to adjust three main parameters on-the-fly simultaneously, as partially reviewed above: (i) its integration time (i.e. how long it ‘reads’ bottom-up information to get a ‘snapshot’ of its ‘self’ and its ‘world’), (ii) its position in state space (i.e. how ‘fluid’ or ‘rigid’ the system is), and (iii) how often it will update its lower-level components. As discussed above, if an organism is at rest and everything is as expected it might want not only to integrate and update its ‘self’/’world’ model less frequently, but also to update its lower-level components less frequently to conserve energy as firing action potentials is energetically expensive [47]. Alternatively, if the organism encounters something unexpected it might increase its firing rate not only to integrate and update its ‘self’/’world’ model more frequently, as above, but also to update its downstream components more often to alert them of potential internal or external changes to the system or its environment. Thus, the synchronous firing of the SELFO might not only provide top-down sub-component coordination to ensure organism unity, it might also communicate information about the overall state of the system via changes in its frequency—changes that are a result of ongoing bottom-up input from all of its constituent parts, thus making it a highly adaptable intrinsically adjustable oscillator/‘master clock’.

7. Conclusion

The picture sketched above is highly speculative. However, there is a real phenomenon to be explained—SELFOs, which are highly conserved in mammals, have been ‘discovered’ multiple times in Cnidaria and now in many other widely divergent phyla, including plants and single-celled organisms. To date, this activity has attracted little attention outside of human neuroimaging studies with poor spatial and temporal resolution.

As the only animal whose entire nervous system can currently be imaged simultaneously at single-cell resolution while behaving [1], Hydra will have an important role to play in this investigation. It remains unclear whether the Hydra SELFO, the RP1 network active ‘at rest’, is also involved in generating behaviour (elongation), as proposed in the recent work (figure 1): no obvious relationship between RP1 activity and elongation was observed, leading the authors to speculate that, rather than directly generating behaviour, RP1 may serve to integrate sensory information and coordinate behaviour [1]. This is an important distinction that can be tested in further high-resolution studies of the freshwater polyp in which RP1 activity (its frequency, amplitude and phase) can be precisely measured during different behaviours and during ‘rest’ to more definitively determine whether the Hydra SELFO is only involved in non-behaviour-generating processes (i.e. ‘rest’ and behaviour coordination) or also plays a role in direct behaviour-generation. In addition to its relationship with Hydra behaviour, the relationship of RP1 activity with the other neural networks can also be rigorously assessed to determine if it does, indeed, coordinate them, and, if so, precisely how. The most definitive experiment would be disruption of the Hydra SELFO by optogenetic, pharmacologic or physical means. Two major findings would be expected based on the above hypotheses regarding the potential role of SELFOs: (i) a more ‘disordered’ Hydra nervous system owing to the loss of ‘top-down’ feedback to keep the system at or near criticality, and (ii) less coordinated behaviour owing to the loss of organism-wide electrical information integration and communication required to maintain organism unity.

In addition to allowing loss-of-function experiments, Hydra is also uniquely suited to study the natural development and function of SELFOs as it reproduces asexually by budding [125]. This allows the study of how multiple bodies (i.e. buds and parents attached to each other) might function as one coherent organism while sharing the same synchronous SELFO (i.e. sharing the same electrical organism organizer) and subsequently start to function as multiple uncoordinated, individual organisms (while still physically attached) with the development of asynchronous, separate SELFOs (i.e. two separate electrical organism organizers). Hydra also possesses remarkable regenerative capacities [20], allowing myriad cutting and grafting experiments in which animals of all shapes and sizes can be generated with varying numbers of neurons and SELFOs to explore how coordinated versus uncoordinated activity might emerge in these structures (e.g. Hydra with multiple heads, multiple feet, no head, no foot, or any combination thereof). Not only does Hydra regenerate when cut, it also forms a new whole animal from totally dissociated single cells [126], allowing the study of the emergence of SELFOs and organism-wide coordination within a group of dissociated elements. Lastly, an adult Hydra is constantly rebuilding itself, turning over all of its parts every 20 days [127], which allows the study of how an organism maintains its body, its nervous system and SELFO, and coherent behaviour despite ever-changing components.

Since the discovery of Hydra over 300 years ago, this simple animal has taught us a great deal about biological systems: primarily, how organisms build their bodies using molecules. Now, Hydra is beginning to reveal the secrets of its nervous system, in which ‘cryptic’ SELFOs have been lying in wait since their initial discovery in the 1960s. It is no surprise that such spontaneous neural activity was overlooked, as there was no place for it within the dominant input–output paradigm inherited from Sherrington [45]. The recent unexpected discovery of a SELFO, the DMN, in the human brain, however, has required a substantial revision of such a ‘reflex’ model of nervous systems and has reignited interest in endogenous neural activity. Since its discovery, the DMN has become increasingly linked to the ‘self’ in humans, potentially acting as a brain-wide integrator, but its precise function and mechanism remain obscure given the limitations of both imaging and manipulating human brains. Interestingly, the same kind of spontaneous electrical activity found in the human DMN appears to be highly conserved throughout life. The widespread presence of SELFOs suggests that they may play an important role in organism-wide integration and communication in biological systems at all levels of scale and opens the door to their study in more experimentally tractable systems, such as Hydra. As throughout the history of biology, this basal animal is poised to once again teach us about another fundamental aspect of living systems: this time, how organisms create and maintain coherent, adaptive wholes using electricity. Insights gained in Hydra, as before, are likely to apply to biological systems at all levels of scale, from bacteria to humans, and have important implications for psychiatry, neurology and, potentially, tumorigenesis.


Results

From firing patterns to firing pattern phenotypes

Version 1.3 of Hippocampome.org contains suitable electrophysiological recordings for 90 of the 122 morphologically identified neuron types. Applying the firing pattern identification algorithm to these digitized data resulted in the detection of 23 different firing patterns. A given neuron type may demonstrate distinct firing patterns in response to different stimuli or conditions. The set of firing patterns exhibited by a given neuron type forms its firing pattern phenotype.

The simplest case consists of those neuron types that systematically demonstrate the same firing pattern independent of experimental conditions or stimulation intensity. These neuron types may still display quantitatively different responses to stimuli of various amplitudes (typically increasing their firing frequency upon increasing stimulation), but their qualitative firing patterns remain the same. We identified 37 such “individual/simple-behavior types” in Hippocampome.org, as exemplified by DG Basket cells with their NASP phenotype 37 .

In contrast to the above scenario, certain neuron types exhibit qualitatively distinct firing patterns in response to different amplitudes of stimulation. We identified 20 such “multi-behavior” types; for instance, medial EC Layer V-VI Pyramidal-Polymorphic cells demonstrate delayed non-adapting and adapting spiking14, and CA1 Neurogliaform projecting cells30 display adapting spiking and persistent stuttering at different stimulus intensities. The firing phenotypes of these neurons thus consist of combinations of two firing patterns.

In a different set of cases, subsets of neurons from the same morphologically identified type display distinct firing patterns under the same experimental conditions (typically from the same study) in response to identical stimulation. These neuron types can thus be divided into electrophysiological subtypes. For example, among CA3 Spiny Lucidum interneurons, some are adapting spikers whereas others are persistent stutterers38. In certain neuron types, one or more of the subtypes could also display multiple behaviors at different stimulation intensities. For instance, one subset of entorhinal Layer III Pyramidal neurons consists of non-adapting spikers, and another subset switches from ASP.NASP at rheobase to RASP.ASP. at higher stimuli14. Of the 90 neuron types with firing patterns in Hippocampome.org, 22 could be divided into 52 electrophysiological subtypes. Notably, these included the principal neurons of most sub-regions of the hippocampal formation (CA3, CA1, and subiculum Pyramidal cells and entorhinal Spiny Stellate cells), as well as several GABAergic interneurons such as dentate Total Molecular Layer (TML) cells39. Specifically, 8 neuron types yielded 18 subtypes exclusively demonstrating single behaviors; for 11 neuron types, at least one of the subtypes exhibited multi-behaviors, resulting in 13 multi-behavior subtypes and 13 additional single-behavior subtypes.

This meta-analysis is complicated by the variety of experimental conditions used in the published literature from which the electrophysiological data were extracted. Several differences in materials and methods could affect firing patterns beyond the animal species (rats vs. mice) or recording technique (patch clamp vs. microelectrode). For example, 30% of experimental traces were recorded from transverse slices, 24% from horizontal, 8% from coronal, 29% from mixed (e.g. “horizontal or semicoronal”), and 9% from other orientations (e.g. custom angles). Furthermore, pipettes were filled with potassium gluconate in 69% of cases, with potassium methylsulfate in 22%, and with potassium acetate in 9% (see e.g. Supplementary Table S2). While these different experimental conditions can affect membrane biophysics substantially40 and often quantitatively influence neuronal firing (e.g. changing the spiking frequency), occasionally they can also cause a qualitative switch between distinct firing patterns. A striking case is that of rat DG Granule cells, which have demonstrated transient slow-wave bursting followed by silence in whole-cell recordings of horizontal slices from Sprague-Dawley animals41, delayed non-adapting spiking in whole-cell recordings of transverse slices from Wistar animals16, and adapting spiking in intracellular recordings of horizontal slices from Wistar animals42. Because the different firing patterns could be caused by the differences in experimental methods, we annotate a possible “condition-dependence,” but cannot conclusively categorize these cells as multi-behavior or subtypes. Most of the condition-dependent behaviors could be attributed at least in part to the occasional use of microelectrodes instead of patch clamp (now considered the preferred recording method) or to the animal species, as in the case of CA1 Horizontal Basket cells, which display adapting and non-adapting firing in rats and mice, respectively19,43.

Condition dependence can alter the firing patterns not only in cell types with single behaviors, such as MOPP cells42,44, but also in multi-behavior neuron types, such as CA1 Axo-axonic cells18,45. These cases account for 6 and 5 Hippocampome.org neuron types, respectively. Lastly, condition dependence may also be found in specific electrophysiological subtypes, whether they display single behaviors, such as CA1 Pyramidal neurons28,43,46,47, or multi-behaviors, such as entorhinal Layer V Deep Pyramidal neurons14,27,48. These cases respectively account for 2 and 1 Hippocampome.org neuron types, in turn giving rise to 6 condition-dependent subtypes with single behaviors and 2 condition-dependent subtypes with multi-behaviors. Overall, types/subtypes with firing patterns recorded under diverse experimental conditions constitute only 16 percent of the total number of types/subtypes with available recordings.

Figure 2 presents the full firing-pattern phenotypes of all 90 Hippocampome.org neuron types with available data, in the form of separate matrices for the 68 individual neuron types (Fig. 2a) and the 52 subtypes derived from the remaining 22 types (Fig. 2b). In both cases, simple behaviors constitute larger proportions than multi-behaviors, with condition dependence reported for only a minority of types and subtypes (Fig. 2c). Across these neuron types/subtypes, 44 distinct phenotypes can be identified as unique combinations of firing patterns, excluding those that differ from others only by the absence of a detectable steady state in one of the firing patterns (such as ASP. versus ASP.NASP or ASP.SLN). An interactive online version of these matrices is available at hippocampome.org/php/firing_patterns.php.

Identified firing patterns and firing-pattern phenotype complexity for neuron types (a) and subtypes (b). Online matrix: hippocampome.org/firing_patterns.php. Green and red cell type/subtype names denote excitatory (e) and inhibitory (i) neurons, respectively. FPP: firing pattern phenotype. The numbers in brackets correspond to the order in which the cell types are presented in Hippocampome.org (v.1.3). The orange asterisk denotes different experimental conditions. (c) Complexity of firing pattern phenotypes: percentages and ratios indicate occurrences of phenotypes of different complexity among the 120 cell types/subtypes.

Dissecting firing patterns into firing pattern elements across neuron types

Firing patterns and firing pattern elements are also diverse with respect to their relative frequency of occurrence among hippocampal neuron types. Firing patterns can be grouped based on the number of elements comprising them, namely single (e.g., NASP or PSTUT), double (e.g. ASP.NASP or TSWB.SLN), and triple (D.RASP.NASP and D.TSWB.NASP) or based on whether they are completed (ASP.NASP, TSWB.SLN) or uncompleted, as in ASP., RASP.ASP., and TSTUT.ASP. (Fig. 3a). Of the nine firing pattern elements, the most frequent are ASP and NASP, while the least common are TSTUT, TSWB, and PSWB (Fig. 3b). Notably, accelerated spiking (ACSP) has not been reported in the rodent hippocampus although it is commonly observed in other neural systems, such as turtle ventral horn interneurons 49 and motoneurons 50 .
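As an illustration of how these pattern codes decompose, the short sketch below (in Python, with illustrative example codes) splits Hippocampome.org-style firing-pattern strings into their elements and labels each pattern as completed or uncompleted, following the convention described above that transient elements carry a trailing period while completed patterns end in a steady state (NASP or SLN). It is a minimal sketch under those assumptions, not the portal's own parsing code.

```python
# Minimal sketch: decompose firing-pattern codes (e.g. "D.RASP.NASP") into
# their elements and classify completed vs. uncompleted patterns, following
# the conventions described in the text. Illustrative only.

TRANSIENT = {"D", "ASP", "RASP", "TSTUT", "TSWB"}   # written with a trailing dot
STEADY = {"NASP", "SLN", "PSTUT", "PSWB"}           # steady-state elements

def parse_pattern(code: str):
    """Return (elements, completed) for a pattern code such as 'RASP.ASP.'."""
    tokens = [t for t in code.split(".") if t]
    completed = (not code.endswith(".")) and tokens[-1] in STEADY
    elements = [t + "." if t in TRANSIENT else t for t in tokens]
    return elements, completed

for code in ["NASP", "PSTUT", "ASP.", "ASP.NASP", "TSWB.SLN", "D.RASP.NASP", "RASP.ASP."]:
    elems, done = parse_pattern(code)
    label = "completed" if done else "uncompleted"
    print(f"{code:12s} -> {len(elems)} element(s) {elems} ({label})")
```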

Occurrence of firing patterns, firing pattern elements and firing pattern phenotypes among hippocampal formation neuron types. (a) Distribution of the 23 firing patterns; total numbers are shown above the bars. (b) Distribution of the 9 firing pattern elements; total numbers are in parentheses below, and percentages of occurrence among the 90 cell types are above the bars. (c) Relationships between firing pattern elements in the firing patterns of hippocampal neuron types. Numbers of cell types with distinctive firing patterns are indicated.

The relationships between the sets of firing pattern elements observed in hippocampal neuron types can be summarized in a Venn diagram, with firing pattern elements represented as ellipses and their intersections corresponding to complex firing patterns (Fig. 3c). This analysis highlights the following features: the four main firing transients (ASP., RASP., TSTUT., TSWB.) often end either with NASP or with SLN; ASP. is often preceded by RASP. and occasionally by TSTUT.; interrupted steady-state firings (PSTUT and PSWB) stand out as a separate group; and delay (D.) most often precedes NASP. Fifteen of the 38 possible completed firing patterns were found in the literature for morphologically identified hippocampal neuron types (Table S3 in Supplementary Information).

Classification and distribution of firing pattern phenotypes

In order to classify the 44 unique firing pattern phenotypes observed in the hippocampal formation, we weighted the constituent firing pattern elements according to the frequency of occurrence among 120 neuron types and electrophysiological subtypes (see Methods). As a result, infrequent firing pattern elements (PSWB, TSTUT and TSWB) received high weights (0.99, 0.95 and 0.93, respectively), very frequent elements (ASP and NASP) were assigned low weights (0.42 and 0.41), and common elements (D, RASP, PSTUT and SLN) obtained intermediate weights (0.90, 0.80, 0.88 and 0.87). Two-step cluster analysis identified ten firing pattern families as leaves of a seven-level hierarchical binary tree (Fig. 4a). At the highest level, hippocampal neuron types and subtypes are divided into two major groups: those with spiking phenotypes (78%) and those with interrupted firing phenotypes (22%). The latter are separated into bursting (6%) and stuttering (16%), and each of these is subdivided into persistent and non-persistent families. A first group of the neuron types with spiking phenotypes is distinguished based on delay (9% of cell types). The remaining neuron types split into adapting (54%) and non-adapting phenotypes (15%). The adapting group consists of neuron types with rapidly adapting phenotypes (18%) and normally adapting (36%) phenotypes. Among the normally adapting group, the following phenotypes can be distinguished: discontinuous adapting spiking (6%) with ASP.SLN pattern, adapting-non-adapting spiking (15%) with ASP.NASP patterns, and a last “spurious” phenotype of uncompleted adapting spiking (15%) with ASP. pattern only, for which the steady state (SLN or NASP) was not determined. This division of the adapting spiking groups reflects differences in adaptation rates, duration, and subsequent steady states.
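To make the weighting-and-clustering idea concrete, the sketch below builds weighted element-presence vectors for a few toy phenotypes and clusters them hierarchically. The occurrence counts, the weighting formula (one minus relative frequency among 120 types/subtypes, chosen because it reproduces the weights quoted above), and the use of SciPy's Ward linkage as a stand-in for two-step clustering are all assumptions for illustration; the study's actual weights and clustering procedure are specified in its Methods.

```python
# Hedged sketch of the phenotype-clustering idea: weight each firing-pattern
# element inversely to how often it occurs, build a weighted element-presence
# vector per phenotype, and cluster the vectors hierarchically.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

ELEMENTS = ["D", "RASP", "ASP", "NASP", "TSTUT", "TSWB", "PSTUT", "PSWB", "SLN"]

# Assumed occurrence counts among 120 types/subtypes (chosen so that
# weight = 1 - count/120 matches the weights quoted in the text).
counts = {"ASP": 70, "NASP": 71, "D": 12, "RASP": 24, "PSTUT": 14,
          "SLN": 16, "TSTUT": 6, "TSWB": 8, "PSWB": 1}
weights = {e: 1.0 - counts[e] / 120.0 for e in ELEMENTS}   # rarer element -> larger weight

def phenotype_vector(patterns):
    """Weighted indicator vector of the elements present in a phenotype."""
    present = {el for p in patterns for el in p.split(".") if el}
    return np.array([weights[e] if e in present else 0.0 for e in ELEMENTS])

phenotypes = {                      # toy phenotypes (sets of firing patterns)
    "DG Basket": ["NASP"],
    "CA1 Axo-axonic": ["PSTUT", "ASP.NASP"],
    "CA3 Pyramidal": ["TSWB.NASP", "ASP.NASP"],
    "CA1 Basket": ["TSTUT.ASP."],
}
X = np.vstack([phenotype_vector(p) for p in phenotypes.values()])
Z = linkage(X, method="ward")                  # hierarchical tree over phenotypes
print(dict(zip(phenotypes, fcluster(Z, t=2, criterion="maxclust"))))
```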

Firing-pattern phenotype families from 120 neuron types/subtypes. (a) Hierarchical tree resulting from two-step clustering of weighted firing pattern elements with representative examples of cell types/subtypes that belong to one of the corresponding firing-pattern phenotype families. Note that the simple adapting spiking pattern (ASP. only) constitutes a “spurious” phenotype of uncompleted adapting spiking (15%), for which the steady state (SLN or NASP) was not determined. (b) Percentage of occurrence of firing-pattern elements in families of firing pattern phenotypes. (c) Relative proportions of firing-pattern phenotype families among neuron types/subtypes. Green and red numbers represent excitatory and inhibitory cell types/subtypes as enumerated in Fig. 2. (d) Distribution of firing-pattern phenotype families in sub-regions of the hippocampal formation. FPP% is percentage of expression of families of firing pattern phenotypes.

This analysis also highlights the most distinguishing firing pattern elements of each family (Fig. 4b). In particular, D. is the defining element for delayed spiking, PSTUT for persistent stuttering, and ASP. and SLN for discontinuous adapting spiking. Each of the four major elements of interrupted firing patterns (PSWB, PSTUT, TSWB. and TSTUT.) is observed in a single firing pattern phenotype (persistent bursting, persistent stuttering, non-persistent bursting, and non-persistent stuttering, respectively). Other firing pattern elements (D., RASP., ASP., NASP, and SLN) appear in several firing pattern phenotypes. The proportions of non-defining firing pattern elements range from 5% to 83%.

The families of firing pattern phenotypes are differentially distributed within the set of 120 neuron types/subtypes (Fig. 4c). Certain phenotype families are associated with excitatory neuron types, either exclusively (e.g. persistent bursting and non-persistent bursting) or predominantly (non-persistent stuttering, rapidly adapting, and adapting-non-adapting spiking). Conversely, persistent stuttering, delayed spiking, non-adapting spiking and simple adapting spiking are phenotypes composed largely of inhibitory neuron types. The discontinuous adapting spiking phenotype has relatively balanced proportions of excitatory and inhibitory neuron types.

The firing pattern phenotypes also have different distributions among the sub-regions of the hippocampal formation (Fig. 4d). Among CA1 neuron types, the persistent stuttering (16%), non-adapting (24%), simple adapting (16%), and rapidly adapting spiking (13%) phenotypes are more common than other phenotypes; in DG, the most expressed phenotypes are delayed (20%), rapidly adapting (20%), and simple adapting spiking (15%); in EC, ASP-NASP (61%), discontinuous ASP. (11%), RASP. (28%), and NASP (19%) occur more often than other phenotypes.

Usage of information from Hippocampome.org

Searching and browsing

The addition of firing pattern data to Hippocampome.org extends opportunities for broad-scope analytics and quick-use checks of neuron types. Similar to morphological, molecular, and biophysical information, firing patterns and their parameters can be browsed online with the interactive versions of the matrices presented in Fig. 2 (hippocampome.org/php/firing_patterns.php), along with an accompanying matrix for browsing the stimulation parameters (duration and intensity) and the firing pattern parameters (delay, number of inter-spike intervals, etc.). Moreover, all classification and analysis results reported here can be searched with queries combining AND/OR Boolean logic through an intuitive graphical user interface (see Hippocampome.org → Search → Neuron Type). The integration within the existing comprehensive knowledge base enables any combination of both qualitative (e.g. PSTUT) and quantitative (e.g. ISImax,i > 4 × ISIi+1) firing pattern properties with molecular (e.g. calbindin-negative), morphological (e.g. axons in CA1 pyramidal layer), and biophysical (e.g. action potential width < 0.8 ms) filters (Fig. 5). For example, of the 13 neuron types with persistent stuttering, in 7 the largest inter-spike interval (ISImax,i) is more than 4 times longer than the subsequent ISI (ISIi+1). When adding the other three selected criteria, the compound search leads to a single hit: CA1 Axo-axonic neurons (Fig. 5a). Clicking on this result leads to the interactive neuron page (Fig. 5b), where all information associated with a given neuron type is logically organized, including synonyms, morphology, biophysical parameters, molecular markers, synaptic connectivity, and firing patterns. Every property on the neuron pages and browse matrices, including firing patterns and their parameters, links to a specific evidence page that lists all supporting bibliographic citations, complete with extracted quotes and figures (Fig. 5c). The evidence page also contains a table with all corresponding firing pattern parameters (Fig. 5d), experimental details including information about animals (Fig. 5e), preparations (Fig. 5f), recording method and intra-pipette solution (Fig. 5g), ACSF (Fig. 5h), and a downloadable file of inter-spike intervals (Fig. 5i).
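The same compound filter can be expressed offline on a hypothetical exported table, as in the hedged sketch below; the column names and toy rows are invented for illustration and do not reflect Hippocampome.org's actual schema or data.

```python
# Hedged sketch: the compound query above expressed over a hypothetical local
# table of neuron-type properties. Column names are illustrative only.
import pandas as pd

types = pd.DataFrame([
    # name,              calbindin, axon in CA1 s.p., AP width (ms), firing patterns,  max ISI / next ISI
    ("CA1 Axo-axonic",   "negative", True,            0.6, {"PSTUT", "ASP.NASP"},      5.2),
    ("CA1 Basket",       "negative", True,            0.5, {"TSTUT.ASP."},             1.8),
    ("CA3 Pyramidal",    "positive", False,           1.4, {"TSWB.NASP", "ASP.NASP"},  2.0),
], columns=["name", "calbindin", "axon_CA1_SP", "ap_width_ms", "patterns", "isi_max_ratio"])

hits = types[
    (types.calbindin == "negative")
    & types.axon_CA1_SP
    & (types.ap_width_ms < 0.8)
    & types.patterns.apply(lambda p: "PSTUT" in p)
    & (types.isi_max_ratio > 4)          # ISImax,i more than 4x the following ISI
]
print(hits.name.tolist())                # -> ['CA1 Axo-axonic']
```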

Hippocampome.org enables searching neuron types by neurotransmitter; axon, dendrite, and soma locations; molecular expression; electrophysiological parameters; input/output connectivity; firing patterns; and firing pattern parameters. (a) Sample query for calbindin-negative neuron types with axons in CA1 stratum pyramidale, APwidth < 0.8 ms, PSTUT firing, and a ratio of maximum ISI to the next ISI greater than 4. Numbers in parentheses indicate the number of neuron types with the selected property or specific combination of properties. (b) Search results are linked to the neuron page(s). (c) The neuron page is linked to the firing pattern evidence page. Original data extracted from Pawelzik et al.18. All firing pattern parameters (d), experimental details including information about animals (e), preparations (f), recording method and intra-pipette solution (g), as well as ACSF composition (h) can be displayed. (i) Downloadable comma-separated-value file of inter-spike intervals.

The portal also reports, when available, the original firing pattern name descriptions used by the authors of the referenced publication (Hippocampome.org → Search → Original Firing Pattern) and provides links to corresponding published models from ModelDB (https://senselab.med.yale.edu/modeldb/).

Statistical analysis of categorical data

Firing pattern information more than doubles the Hippocampome.org knowledge base capacity to over 27,000 pieces of knowledge, that is, associations between neuron types and their properties. This extension allows for the confirmation of known tendencies and the unearthing of hidden relationships between firing patterns and molecular, biophysical, and morphological data in hippocampal neurons, which are otherwise difficult to find amongst the large body of literature. We computed p-values using Barnard's exact test for 2 × 2 contingency tables. Comparing the observable firing pattern elements with molecular marker expression, electrophysiological parameters, primary neurotransmitter, and axonal projection properties, and requiring p-values less than 0.05 and false discovery rates less than 0.25, yielded 29 statistically significant correlations. Several interesting examples of such findings are presented in Fig. 6. For instance, adapting spiking (ASP.) tends to co-occur with expression of cholecystokinin (p = 0.0113 with Barnard's exact test from all n = 26 pieces of evidence; see Lee et al.51 for an example); moreover, silence (SLN) after a short firing discharge is not observed in neuron types with low membrane time constants (lower tercile; n = 32, p = 0.0235).
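For readers who wish to reproduce this kind of categorical analysis, the sketch below runs Barnard's exact test on a made-up 2 × 2 contingency table with SciPy and applies a standard Benjamini–Hochberg step-up check of the false discovery rate; the counts and p-values are illustrative, not the study's.

```python
# Hedged sketch of the categorical analysis: Barnard's exact test on a 2x2
# contingency table plus a Benjamini-Hochberg false-discovery-rate check.
import numpy as np
from scipy.stats import barnard_exact

# rows: ASP. present / absent; columns: CCK-positive / CCK-negative (made-up counts)
table = np.array([[9, 3],
                  [4, 10]])
res = barnard_exact(table, alternative="two-sided")
print(f"Barnard's exact test: p = {res.pvalue:.4f}")

def benjamini_hochberg(pvalues, q=0.25):
    """Boolean mask of discoveries at FDR level q (standard BH step-up)."""
    p = np.asarray(pvalues)
    order = np.argsort(p)
    thresholds = q * np.arange(1, len(p) + 1) / len(p)
    passed = p[order] <= thresholds
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    mask = np.zeros(len(p), dtype=bool)
    mask[order[:k]] = True
    return mask

print(benjamini_hochberg([0.011, 0.024, 0.20, 0.65], q=0.25))
```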

Examples of statistically significant correlations between firing pattern elements and known molecular, morphological and electrophysiological properties in hippocampal neurons. The p-values are computed using Barnard's exact test for 2 × 2 contingency tables and satisfy FDR < 0.25 (see Methods).

Analysis of numerical electrophysiological data

The extracted quantitative data allow one to study the relationship between firing pattern parameters and membrane biophysics or spike characteristics, such as the correlation between the minimum inter-spike interval (ISImin) and action potential width (APwidth). We analyzed these two variables in the 81 neuron types and subtypes for which both measurements are available (Fig. 7). The APwidth–ISImin scatter plot reveals several distinct groupings (Fig. 7a), and the corresponding histograms (Fig. 7b,c) demonstrate poly-modal distributions. The horizontal dashed line (ISImin = 34 ms) separates 9 neurons with slow spikes (all excitatory except one) from 72 neurons (61% of which are inhibitory) with fast and moderate spikes. The latter group shows a general trend of ISImin rising with increasing APwidth (black dashed line in panel A). This trend was adequately fit with the linear function Y = 13.79X − 0.05 (R = 0.72, p = 0.03). Neuron types with slow spikes demonstrate the opposite trend, which was fit with the decreasing linear function Y = −26.72X + 76.42 (R = −0.91, p = 10⁻⁶). Furthermore, the neuron types can be separated by spike width. The vertical dashed lines w1 (APwidth = 0.73 ms) and w2 (APwidth = 1.12 ms) separate neuron types with narrow, medium and wide action potentials. The group of neuron types with narrow spikes (n = 22) includes only inhibitory neurons, which have APwidth in the range from 0.20 to 0.73 ms (0.54 ± 0.12 ms). In contrast, the group of neuron types with wide spikes (n = 28) contains only excitatory neurons, with APwidth in the range from 1.13 to 2.10 ms (1.49 ± 0.23 ms). The group of neuron types with medium spikes (n = 31), with APwidth ranging from 0.74 to 1.12 ms (0.89 ± 0.12 ms), includes a mix of inhibitory (74%) and excitatory (26%) neurons.
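The form of these fits can be reproduced with an ordinary least-squares regression, as in the sketch below; the data points are synthetic and only the analysis mirrors the one described here.

```python
# Hedged sketch of the numerical analysis: least-squares fit of ISImin against
# APwidth for the fast/moderate-spike group. The data points are synthetic.
import numpy as np
from scipy.stats import linregress

ap_width = np.array([0.45, 0.60, 0.75, 0.90, 1.05, 1.30, 1.60])   # ms (synthetic)
isi_min = np.array([6.0, 8.5, 10.0, 12.5, 14.0, 18.5, 22.0])      # ms (synthetic)

fit = linregress(ap_width, isi_min)
print(f"ISImin = {fit.slope:.2f} * APwidth + {fit.intercept:.2f}")
print(f"R = {fit.rvalue:.2f}, R^2 = {fit.rvalue**2:.2f}, p = {fit.pvalue:.3g}")
```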

Relationships between the width of action potentials (APwidth) and the minimum inter-spike interval (ISImin) for 81 neuron types and subtypes. (a) APwidth − ISImin scatter diagram with results of linear regression. Green triangles and red circles indicate excitatory and inhibitory neurons, respectively. Dashed orange lines: the horizontal line separates neurons with slow spikes from neurons with fast and moderate spikes; the vertical lines (w1 and w2) separate neurons with narrow, medium and wide action potentials. Black lines: the solid line shows the linear fit for slow-spike neurons, Y = −26.72X + 76.42 (R² = 0.83); the dashed line shows the general linear fit for fast- and moderate-spike neurons, Y = 13.79X − 0.05 (R² = 0.52). (b) APwidth histogram. (c) ISImin histogram.

Among the 22 neuron types/subtypes in the group with APwidth ≤ 0.73 ms, 13 demonstrated so-called fast-spiking behavior, which is distinguished by narrow spikes, a high firing rate, and absent or weak spike frequency adaptation52. Beyond these common characteristics, however, their firing patterns vary broadly even from a qualitative standpoint. Five of these 13 neuron types belong to the PSTUT family, namely CA3 Trilaminar53, CA3 Aspiny Lucidum ORAX54, CA2 Basket17, CA1 Axo-axonic18, and CA1 Radial Trilaminar19. Three types belong to the NASP family: DG Basket37, CA1 Horizontal Axo-axonic19, and MEC LIII Superficial Multipolar Interneuron55. Two types, CA3 Axo-axonic56 and CA2 Bistratified17, belong to the simple adapting spiking family; two types, DG HICAP39 and DG AIPRIM16, belong to the ASP-NASP family; and lastly CA1 Basket51 belongs to the non-persistent stuttering family.

Additionally, firing pattern families are unequally distributed among the groupings revealed by the above analysis. The persistent and non-persistent stuttering families and the non-persistent bursting phenotype are composed entirely of neuron types with fast or moderate spikes of narrow or medium width. Conversely, the rapidly adapting–non-adapting spiking phenotype is represented solely by neurons with spikes of intermediate width.


METHODS

Preparations

Data were recorded from the somata of neurons isolated from the superior part of the vestibular ganglion, most from Long–Evans rats on postnatal day 0 (P0, day of birth) to P16, and some from mice (129/Sv strain), P0–P8. Animals were handled in accordance with the National Institutes of Health Guide for the Care and Use of Laboratory Animals, and all procedures were approved by the animal care committee at the Massachusetts Eye and Ear Infirmary. Chemicals were obtained from Sigma-Aldrich (St. Louis, MO) unless otherwise specified.

Temporal bones were dissected in chilled and oxygenated Leibovitz-15 (L-15) medium supplemented with 10 mM HEPES (pH 7.4); we refer to this as the standard medium. The otic capsule was exposed and the superior part of the vestibular ganglion was detached from its distal and central nerve branches. The superior compartment supplies the utricular macula, the cristae of the lateral and anterior semicircular canals, and part of the saccular macula. The ganglion tissue was placed in standard medium to which 0.05% collagenase and 0.25% trypsin had been added, for about 25 min at 37 °C. The ganglion tissue was then mechanically dissociated by trituration into either standard medium or a bicarbonate-buffered medium (see following text). Cells settled onto a glass-bottom culture dish precoated with poly-D-lysine.

Recordings were made after periods of 1–8 h (acute; n = 42 of 179 cells), roughly 16–24 h (1 day in vitro; n = 123), or roughly 40–48 h (2 days in vitro; n = 14). To compare cells at different ages, we assigned each cell an age equal to the age at dissection plus the number of days spent in vitro; thus a cell recorded acutely at P9 would be compared with a cell dissociated on P7 and maintained for 2 days in vitro. Acute preparations were successful only at <P7; at older ages, satellite cells on the somata prevented sealing onto the neuronal membrane with patch pipettes. Storing the cells for longer periods tended to remove the satellite cells, allowing recordings from older cells. Two storage conditions were used: 1) cells were stored overnight in standard medium (HEPES-supplemented L-15) at 4 °C; or 2) cells were dissociated in bicarbonate-buffered culture medium (minimal essential medium [MEM]; Invitrogen, Carlsbad, CA) supplemented with 10 mM HEPES and 1% penicillin-streptomycin (Invitrogen) and incubated overnight or longer in 5% CO2-95% air at 37 °C. Although neurons (and satellite cells) from older animals survived better with the second culturing method, the number of surviving cells decreased with age for both methods. For age-matched neurons from the different preparations (acutely dissociated, stored at 4 °C, or incubated at 37 °C), we found no significant differences in the properties we measured and therefore have pooled the results.

Electrophysiology

RECORDINGS.

Isolated cells were viewed with an inverted microscope (Olympus IMT-2; Olympus, Lake Success, NY) equipped with Nomarski optics. To avoid contamination by axonal membrane, we present data only from cells that lacked visible processes. Signals were delivered, recorded, and amplified either with an Axopatch 200A amplifier and a Digidata 1200 data acquisition board controlled by pClamp 8 software, or with an Axopatch 200B amplifier, Digidata 1440 board, and pClamp 10 software (MDS, Toronto, Canada). To reduce distortion of action potential shape (Magistretti et al. 1998), we recorded in the fast current-clamp mode of the amplifier.

We used filamented borosilicate glass recording pipettes with resistances of 3–5 MΩ in our standard solutions. To reduce pipette capacitance, we either wrapped electrode shanks in parafilm or coated them with a silicone elastomer (Sylgard 184; Dow Corning, Midland, MI). During ruptured-patch whole-cell recordings, the loss of cellular contents can change the properties of ion channels. To minimize such changes, we used the perforated-patch method of whole-cell recording, in which the membrane patch contacted by the electrode is perforated by amphotericin B (Sigma-Aldrich). The amphotericin-B pores allow only small monovalent ions to pass freely (Horn and Marty 1988). The pipette solution for perforated-patch recordings contained (in mM): 75 K2SO4, 25 KCl, 5 MgCl2, 5 HEPES, 5 EGTA, 0.1 CaCl2, and 240 μg/ml amphotericin B, titrated with 13 mM KOH to pH 7.3, for a final K+ concentration of about 188 mM and an osmolality of about 270 mmol/kg. During recordings, series resistance ranged from 12 to 27 MΩ and was compensated electronically by 80 to 50%, respectively, for final effective resistances of 2.4 to 12 MΩ. Activation curves (conductance–voltage [g–V] curves) were corrected off-line for residual (uncompensated) series resistance. All membrane voltages were corrected off-line for a 5.1 mV junction potential, computed with JPCalc (Barry 1994) as implemented in pClamp 10.2.
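The off-line corrections mentioned above amount to simple arithmetic on each voltage point, as in the hedged sketch below; the command step, current, and residual series resistance are invented example values, not measurements from this study.

```python
# Hedged sketch of the off-line corrections described above: the command
# voltage is adjusted for the voltage drop across the residual (uncompensated)
# series resistance and for the 5.1 mV liquid junction potential.
def corrected_voltage(v_cmd_mV, i_pA, rs_residual_MOhm, junction_mV=5.1):
    """Membrane voltage after series-resistance and junction-potential correction."""
    rs_drop_mV = i_pA * 1e-12 * rs_residual_MOhm * 1e6 * 1e3   # I*R, expressed in mV
    return v_cmd_mV - rs_drop_mV - junction_mV

# e.g. a -20 mV command step evoking 500 pA through 6 MOhm of residual Rs:
print(f"{corrected_voltage(-20.0, 500.0, 6.0):.1f} mV")   # -> -28.1 mV
```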

To monitor time-dependent changes in the neurons, we collected responses to standard protocols (100-ms current and voltage steps) at regular intervals. Only recordings obtained with GΩ seals, and while the resting potential remained stable and sufficiently negative, were included for further analysis. The bath contained fresh oxygenated standard medium, which has 5.7 mM K+ (see Preparations), giving a K+ equilibrium potential of about −90 mV. Most recordings were made at room temperature (22–25 °C), but in some experiments the bath was heated to 35 or 37 °C with a heated platform and temperature controller (TC-344B; Warner Instruments, Hamden, CT).

PHARMACOLOGY.

Stock solutions (100 μM) of α-dendrotoxin (α-DTX) were prepared in distilled water and stored until use. A 10 mM stock solution of linopirdine in DMSO was prepared fresh on the day of recording. All solutions were diluted to their final concentrations in our standard L-15 solution. Drugs were locally applied to cells using a pressurized superperfusion system (Automate Scientific, Berkeley, CA). It was not possible to construct tail-current activation curves because of interference by Na+ currents. Instead, we generated quasi-steady-state g–V curves by measuring the current at 100 ms after step onset and dividing by the approximate driving force, calculated as (Vm − EK). The resulting curves were generally well fit by the Boltzmann equation

g(V) = gmax / (1 + exp[(V1/2 − V)/S]),

where gmax is the maximum conductance, V1/2 is the half-maximum activation voltage, and S is the slope factor. Pharmacology experiments were done with cultured rat neurons (P8–P17, about 25 °C). Activation curves were made only for recordings with stable series resistances.
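Fitting this Boltzmann relation to a measured g–V curve is a standard least-squares problem; the sketch below shows one way to do it with SciPy, using synthetic conductance values rather than recorded data.

```python
# Hedged sketch: fitting the Boltzmann relation above to a quasi-steady-state
# g-V curve. The sample conductances are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(v, g_max, v_half, s):
    """g(V) = g_max / (1 + exp((v_half - v) / s))"""
    return g_max / (1.0 + np.exp((v_half - v) / s))

v = np.arange(-80, 1, 10.0)                                   # mV (synthetic steps)
g = np.array([0.2, 0.4, 0.9, 2.0, 4.1, 6.6, 8.3, 9.2, 9.7])   # nS (synthetic)

(g_max, v_half, s), _ = curve_fit(boltzmann, v, g, p0=[10.0, -40.0, 8.0])
print(f"g_max = {g_max:.1f} nS, V1/2 = {v_half:.1f} mV, slope S = {s:.1f} mV")
```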

ANALYSIS.

We analyzed data with pClamp 10 software (Clampfit; MDS Analytical Technologies, Toronto), Matlab (The MathWorks, Natick, MA), and Origin (OriginLabs, Northampton, MA). Data are given as means ± SE. Statistical significance was estimated with Student's unpaired t-test with Welch's correction for differences in sample variance. Observed significance levels (p values) are reported as significant (p < 0.05, shown as “*” in graphs), very significant (**p < 0.01), and highly significant (***p < 0.001). Input resistance (Rin) was calculated from the voltage change produced by 10-pA hyperpolarizing current steps. We obtained the membrane time constant (τm) by fitting a single exponential to the same voltage response. Membrane capacitance (Cm) is the ratio τm/Rin.
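As a worked illustration of these passive-property calculations, the sketch below fits a single exponential to a synthetic response to a 10-pA hyperpolarizing step and derives Rin, τm, and Cm from it; the "cell" parameters are invented.

```python
# Hedged sketch of the passive-property calculations described above, applied
# to a synthetic voltage response to a 10 pA hyperpolarizing current step.
import numpy as np
from scipy.optimize import curve_fit

dt_ms = 0.1
t = np.arange(0, 200, dt_ms)                     # ms
tau_true, dV_true = 15.0, -4.0                   # ms, mV (synthetic "cell")
v = -60.0 + dV_true * (1.0 - np.exp(-t / tau_true))

def step_response(t, dv, tau):
    """Single-exponential charging curve from a -60 mV baseline."""
    return -60.0 + dv * (1.0 - np.exp(-t / tau))

(dv, tau_m), _ = curve_fit(step_response, t, v, p0=[-3.0, 10.0])
r_in_MOhm = abs(dv) / 10e-12 * 1e-3 / 1e6        # |dV| / 10 pA, in MOhm
c_m_pF = tau_m * 1e-3 / (r_in_MOhm * 1e6) * 1e12 # tau_m / Rin, in pF
print(f"Rin = {r_in_MOhm:.0f} MOhm, tau_m = {tau_m:.1f} ms, Cm = {c_m_pF:.1f} pF")
```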

We used phase plots (Bean 2007) to estimate spike voltage threshold (Vth) as the membrane voltage Vm at which dVm/dt changes rapidly. We empirically determined a threshold criterion of 10 mV/ms, above the voltage noise but small enough to allow threshold detection for small spikes.
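A minimal version of this threshold criterion, applied to a synthetic voltage trace rather than recorded data, is sketched below.

```python
# Hedged sketch of the phase-plot threshold criterion: spike threshold is taken
# as the membrane voltage at which dV/dt first exceeds 10 mV/ms. The waveform
# below is a synthetic spike, not a recording.
import numpy as np

dt_ms = 0.01
t = np.arange(0, 6, dt_ms)                                     # ms
v = -60 + 2.0 * t + 100.0 * np.exp(-((t - 4.0) / 0.3) ** 2)    # slow ramp + spike (mV)

dvdt = np.gradient(v, dt_ms)                                   # mV/ms
idx = np.argmax(dvdt > 10.0)                                   # first crossing of the criterion
print(f"threshold ~ {v[idx]:.1f} mV at t = {t[idx]:.2f} ms")
```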

PSEUDOSYNAPTIC STIMULI.

To drive spiking with stimuli that mimic natural synaptic input, we generated trains of simulated excitatory postsynaptic currents (pseudo-EPSCs) with pseudorandom timing. To represent the shape and size of each pseudo-EPSC, we used alpha functions of the form

I(t) ∝ (t/α) exp(1 − t/α),

whose peak occurs at t = α. The parameter α, which determines the time course of the pseudo-EPSC waveform, was chosen to be 1 ms, to be consistent with voltage-clamp data on EPSCs from vestibular afferent terminals (Rennie and Streeter 2006; R. A. Eatock and J. Xue, unpublished results) and cochlear afferent terminals (Glowatzki and Fuchs 2002). To represent the random timing of quantal synaptic input, we generated an impulse train by drawing times from a Poisson distribution (mean interval: ~2 ms) made to be representative of synaptic arrival times in both vestibular and cochlear afferents (Glowatzki and Fuchs 2002; Holt et al. 2006, 2007). Pseudo-EPSC trains were generated by convolving the impulse trains with the EPSC-like alpha functions.
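The stimulus construction described here can be sketched in a few lines: draw Poisson-timed impulses and convolve them with an alpha-function kernel. The amplitude, rate, and duration below are illustrative values, not the study's parameters.

```python
# Hedged sketch of the pseudo-EPSC stimulus: Poisson-timed unit impulses
# convolved with an alpha-function kernel (alpha = 1 ms).
import numpy as np

rng = np.random.default_rng(0)
dt = 0.05                                     # ms
duration, mean_interval = 500.0, 2.0          # ms
alpha, amplitude = 1.0, 10.0                  # ms, pA (illustrative)

# Poisson impulse train: exponential inter-event intervals with the given mean.
event_times = np.cumsum(rng.exponential(mean_interval, size=int(2 * duration / mean_interval)))
event_times = event_times[event_times < duration]
impulses = np.zeros(int(duration / dt))
np.add.at(impulses, (event_times / dt).astype(int), 1.0)

# Alpha-function kernel, normalized so a single event peaks at `amplitude` pA at t = alpha.
tk = np.arange(0, 10 * alpha, dt)
kernel = amplitude * (tk / alpha) * np.exp(1.0 - tk / alpha)

pseudo_epsc = np.convolve(impulses, kernel)[: impulses.size]   # pA
print(f"{event_times.size} events, peak current ~ {pseudo_epsc.max():.1f} pA")
```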

The number of pseudo-EPSC sweeps presented depended on the output spike rate. Spikes were detected with the built-in spike-thresholding algorithms of Clampfit 10.2. The coefficient of variation (CV) of spike times was the SD divided by the mean interspike interval. CV values are reported only for histograms with enough data points to be well formed, as assessed visually. Consequently, longer recording times were needed for trains with low rates or large SDs of the mean interval. The visual criterion corresponded to SE values that were a small percentage of the mean interval at the slowest spike rates. Ongoing recordings were monitored for consistency of resting potential, input resistance, and response to a standard current-step protocol. Recording times were as long as 1 h.

To examine the effect of pseudo-EPSC rate on firing (as shown later in Fig. 13), we generated trains of pseudo-EPSCs at uniform, predetermined intervals (10 ms or longer) rather than at pseudorandom intervals.

Sustained neurons had longer integration times than did transient neurons. Uniformly spaced trains of pseudo-EPSCs were applied to a sustained neuron (A–C; P4; neuron from Fig. 10; step-evoked firing pattern shown in D) and a transient neuron (E–G; P5; step-evoked firing pattern shown in H). All spikes are truncated. The stimulus current train, shown below each response, delivered pseudo-EPSCs at intervals of 30 ms (A and E), 20 ms (B and F), or 10 ms (C and G). A–C: the sustained neuron integrated pseudo-EPSCs that were individually subthreshold (10 pA), producing spiking for intervals of 20 ms or less. At 30-ms intervals, the neuron did not fire (A); at 20-ms intervals, it began to fire (B); and at 10-ms intervals, it fired faster (C). E–G: the transient neuron integrated little, spiking only for pseudo-EPSCs that were individually suprathreshold; responses to 80-pA pseudo-EPSCs are shown. At 30-ms intervals (E), the neuron fired for every pseudo-EPSC. At 20-ms intervals (F), the neuron did not fire for every pseudo-EPSC, but the timing of each spike was tightly coupled to the timing of a pseudo-EPSC. At 10-ms intervals (G), the neuron fired just one spike at the start of the pseudo-EPSC train.


Community Forum

Epileptic seizures arise when the electrochemical impulses that neurons normally use to act on other neurons, glands, and muscles (producing human thoughts, feelings, and actions) become abnormal.

In epilepsy, the normal pattern of neuronal activity becomes disturbed, causing strange sensations, emotions, and behavior, or sometimes convulsions, muscle spasms, and loss of consciousness.

During a seizure, neurons may fire as many as 500 times a second, much faster than the normal rate of about 80 times a second. In some people, this happens only occasionally; for others, it may happen up to hundreds of times a day.

One of the most-studied neurotransmitters that plays a role in epilepsy is GABA, or gamma-aminobutyric acid, which is an inhibitory neurotransmitter.

Research on GABA has led to drugs that alter the amount of this neurotransmitter in the brain or change how the brain responds to it. Researchers are also studying excitatory neurotransmitters such as glutamate.

Another biological factor that can contribute to seizures involves the ion channels that conduct sodium, potassium, and calcium ions. These channels carry the electric charges that must flow in a regular way for signals to pass steadily from one nerve cell in the brain to another.

If the genes encoding these ion channels are defective, the resulting chemical imbalance can cause nerve signals to misfire, leading to seizures. Abnormalities in ion channels are believed to be responsible for absence seizures and many other generalized seizures.

Serotonin is a brain chemical that is important for well-being and associated behaviors (eating, relaxation, sleep). Imbalances in serotonin are also associated with depression. A 2005 study indicated that depression may be a risk factor for epilepsy and that the two conditions may share common chemical pathways in the brain.




Model shows that the speed at which neurons fire impacts their ability to synchronize

Image caption: Cell membranes have a voltage across them due to the uneven distribution of charged particles, called ions, between the inside and outside of the cell; neurons can shuttle ions across the membrane.

Research conducted by the Computational Neuroscience Unit at the Okinawa Institute of Science and Technology Graduate University (OIST) has shown for the first time that a computer model can replicate and explain a unique property displayed by a crucial brain cell. Their findings, published today in eLife, shed light on how groups of neurons can self-organize by synchronizing when they fire fast.

The model focuses on Purkinje neurons, which are found within the cerebellum. This dense region of the hindbrain receives inputs from the body and other areas of the brain in order to fine-tune the accuracy and timing of movement, among other tasks.

"Purkinje cells are an attractive target for computational modeling as there has always been a lot of experimental data to draw from," said Professor Erik De Schutter, who leads the Computation Neuroscience Unit. "But a few years ago, experimental research into these neurons uncovered a strange behavior that couldn't be replicated in any existing models."

These studies showed that the firing rate of a Purkinje neuron affected how it reacted to signals fired from other neighboring neurons.

The rate at which a neuron fires electrical signals is one of the most crucial means of transmitting information to other neurons. Spikes, or action potentials, follow an "all or nothing" principle: either they occur or they don't, and the size of the electrical signal never changes, only its frequency. The stronger the input to a neuron, the faster that neuron fires.

But neurons don't fire in an independent manner. "Neurons are connected and entangled with many other neurons that are also transmitting electrical signals. These spikes can perturb neighboring neurons through synaptic connections and alter their firing pattern," explained Prof. De Schutter.

Interestingly, when a Purkinje cell fires slowly, spikes from connected cells have little effect on the neuron's spiking. But, when the firing rate is high, the impact of input spikes grows and makes the Purkinje cell fire earlier.

"The existing models could not replicate this behavior and therefore could not explain why this happened. Although the models were good at mimicking spikes, they lacked data about how the neurons acted in the intervals between spikes," Prof. De Schutter said. "It was clear that a newer model including more data was needed."

Fortunately, Prof. De Schutter's unit had just finished developing an updated model, an immense task primarily undertaken by Dr. Yunliang Zang, a former postdoctoral researcher in the unit.

Once the model was completed, the team found that, for the first time, it was able to replicate the unique firing-rate-dependent behavior.

In the model, they saw that, in the interval between spikes, the membrane voltage of slowly firing Purkinje neurons was much lower than that of rapidly firing ones.

"In order to trigger a new spike, the membrane voltage has to be high enough to reach a threshold. When the neurons fire at a high rate, their higher membrane voltage makes it easier for perturbing inputs, which slightly increase the membrane voltage, to cross this threshold and cause a new spike," explained Prof. De Schutter.

The researchers found that these differences in the membrane voltage between fast and slow firing neurons were because of the specific types of potassium ion channels in Purkinje neurons.

"The previous models were developed with only the generic types of potassium channels that we knew about. But the new model is much more detailed and complex, including data about many Purkinje cell-specific types of potassium channels. So that's why this unique behavior could finally be replicated and understood," said Prof. De Schutter.

The key to synchronization

The researchers then decided to use their model to explore the effects of this behavior on a larger-scale, across a network of Purkinje neurons. They found that at high firing rates, the neurons started to loosely synchronize and fire together at the same time. Then when the firing rate slowed down, this coordination was quickly lost.

Using a simpler, mathematical model, Dr. Sungho Hong, a group leader in the unit, then confirmed this link was due to the difference in how fast and slow firing Purkinje neurons responded to spikes from connected neurons.

"This makes intuitive sense," said Prof. De Schutter. He explained that for neurons to be able to sync up, they need to be able to adapt their firing rate in response to inputs to the cerebellum. "So this syncing with other spikes only occurs when Purkinje neurons are firing rapidly," he added.

The role of synchrony is still controversial in neuroscience, with its exact function remaining poorly understood. But many researchers believe that synchronization of neural activity plays a role in cognitive processes, allowing communication between distant regions of the brain. For Purkinje neurons, synchronization allows strong and timely signals to be sent out, which experimental studies have suggested could be important for initiating movement.

"This is the first time that research has explored whether the rate at which neurons fire affects their ability to synchronize and explains how these assemblies of synchronized neurons quickly appear and disappear," said Prof. De Schutter. "We may find that other circuits in the brain also rely on this rate-dependent mechanism."

The team now plans to continue using the model to probe deeper into how these brain cells function, both individually and as a network. And, as technology develops and computing power strengthens, Prof. De Schutter has an ultimate life ambition.

"My goal is to build the most complex and realistic model of a neuron possible," said Prof. De Schutter. "OIST has the resources and computing power to do that, to carry out really fun science that pushes the boundary of what's possible. Only by delving into deeper and deeper detail in neurons, can we really start to better understand what's going on."
