
5.2 Networks in the brain


A completely different use of networks in biology arises in the study of the brain and central nervous system in animals. Two broad classes of brain networks are studied: microscopic networks of connections between individual brain cells and macroscopic networks of functional connection between entire brain regions.

5.2.1 Networks of neurons

One of the main functions of the brain is to process information, and the primary information processing element is the neuron, a specialized brain cell that combines (usually) several inputs to generate a single output. Depending on the species of animal, an entire brain can contain anywhere from a handful of neurons to more than a hundred billion, all wired together, the output of one cell feeding the input of another, to create a neural network capable of remarkable feats of calculation and decision making.

Figure 5.6 shows a sketch of a typical neuron, which consists of a cell body or soma, along with a number of protruding tentacles, which are essentially wires for carrying signals in and out of the cell. Most of the wires are inputs, called dendrites, of which a neuron may have just one or two, or as many as a thousand or more. Most neurons have only one main output, called the axon, which is typically longer than the dendrites and may in some cases extend over large distances to connect the cell to others some way away. Although there is just one axon, it usually branches near its end to allow the output of the cell to feed the inputs of several others. The tip of each branch ends at an axon terminal that abuts the tip of the input dendrite of another neuron. There is a small gap, called a synapse, at the junction of the terminal and dendrite, across which the output signal of the first neuron must be conveyed in order to reach the second. The synapse plays an important role in the function of the brain, allowing the strength of the connection from one cell to another to be regulated by modifying the properties of the junction.5

The actual signals that travel within neurons are electrochemical in nature.

They consist of traveling waves of electrical voltage created by the motion of positively charged sodium, calcium, or potassium ions in and out of the cell. These waves are called action potentialsand involve voltage changes on the order of tens of millivolts traveling at tens of meters per second. When an action potential reaches a synapse, it cannot cross the gap between the axon terminal and the opposing dendrite by itself and the signal is instead transmitted chemically; the arrival of the action potential stimulates the release of a chemical neurotransmitter, which diffuses across the gap and activates receptor molecules at the other side. This in turn causes ions to move in and out of the dendrite, changing its voltage.

These voltage changes, however, do not yet give rise to another traveling wave. The soma of a neuron combines the inputs from its dendrites and as a result may (or may not) send an output signal down its own axon. The neuron is typically stable against perturbations caused by voltage changes at a small number of its inputs, but if enough inputs are excited they can collectively drive the neuron into an unstable runaway state in which it “fires,” generating a new action potential that travels down the cell’s axon, and so a signal is passed on to the next neuron or neurons in the network. Thus, the neuron acts as a switch or gate that aggregates the signals at its inputs and only fires when enough inputs are excited.
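
Viewed this way, the neuron behaves like a simple threshold gate. The short sketch below is a minimal illustration in Python of the behavior just described: the unit sums the signals at its inputs and fires only if the total exceeds a threshold. The weights and threshold values are made up for the example, not biological measurements.

```python
# Minimal sketch of a neuron as a threshold gate: it aggregates the signals
# arriving at its inputs and "fires" only when enough of them are excited.
# The weights and threshold are illustrative values, not biological data.

def neuron_fires(inputs, weights, threshold):
    """Return True if the weighted sum of the inputs exceeds the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return total > threshold

weights = [1.0, 1.0, 1.0]   # three equally weighted inputs
threshold = 1.5             # at least two inputs must be excited to fire

print(neuron_fires([1, 0, 0], weights, threshold))   # False: one excited input is not enough
print(neuron_fires([1, 1, 0], weights, threshold))   # True: two excited inputs drive firing
```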

As described, inputs to neurons are excitatory, increasing the chance that the neuron fires, but inputs can also be inhibiting—signals received at inhibiting inputs make the receiving neuron less likely to fire. Excitatory and inhibiting inputs can be combined in a single neuron and the combination allows neurons to perform quite complicated information processing tasks all on their own, while an entire brain or brain region consisting of many neurons can perform tasks of extraordinary complexity. Current science cannot yet tell us exactly how the brain performs the more sophisticated cognitive tasks that allow animals to survive and thrive, but it is known that the brain constantly changes both the pattern and strength of the connections between neurons in response to inputs and experiences, and it is presumed that the details of these connections—the neural network—hold much of the secret. An understanding of the structure of neural networks is thus crucial if we are to explain the higher-level functions of the brain.

5. Neurons do sometimes have direct connections between them without synapses. These direct connections are called gap junctions, a confusing name, since it sounds like it might be a description of a synapse but is in reality quite different. In our brief treatment of neural networks, however, we will ignore gap junctions.

Figure: A wiring diagram for a small neural network.

At the simplest level, a neuron can be thought of as a unit that accepts a number of inputs, either excitatory or inhibiting, combines them, and generates an output that is sent to one or more further neurons. In network terms, a neural network can thus be represented as a set of nodes—the neurons—connected by two types of directed edges, one for excitatory inputs and one for inhibiting inputs. (In this respect, neural networks are similar to the genetic regulatory networks of Section 5.1.3, which also contain both excitatory and inhibiting connections.) By convention, excitatory connections are denoted by an edge ending with an arrow, while inhibiting connections are denoted by an edge ending with a bar.
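
As a concrete illustration of this representation (a minimal sketch; the neuron labels and connections below are invented for the example, not taken from any figure), such a network can be stored as a directed graph whose edges carry a sign, +1 for excitatory connections and -1 for inhibiting ones:

```python
# Toy neural network stored as a directed graph with signed edges:
# +1 marks an excitatory connection, -1 an inhibiting one.
# The neuron names and wiring are invented purely for illustration.

edges = {
    ("A", "C"): +1,   # A excites C
    ("B", "C"): -1,   # B inhibits C
    ("C", "D"): +1,   # C excites D
}

def inputs_to(target):
    """Return (source, sign) pairs for every edge pointing into `target`."""
    return [(src, sign) for (src, dst), sign in edges.items() if dst == target]

print(inputs_to("C"))   # [('A', 1), ('B', -1)]
```

The same signed-edge bookkeeping works equally well for the genetic regulatory networks mentioned above, which also mix excitatory and inhibiting connections.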

Neurons are not all the same. They come in a variety of different types and even relatively small regions or circuits in the brain may contain many types.

This variation can be encoded in our network representation by different types of nodes. Visually the types are often denoted by using different shapes for the nodes or by labeling. In functional terms, neurons can differ in a variety of ways, including the number and type of their inputs and outputs, the nature and speed of their response to their inputs, whether and to what extent they can fire spontaneously without receiving inputs, and many other things besides.

Experimental determination of the structure of neural networks is difficult and the lack of straightforward experimental techniques for probing network structure is a major impediment to current progress in neuroscience. Some useful techniques do exist, however, although their application can be extremely laborious.

The basic tool for structure determination is microscopy, either optical or electronic. One relatively simple approach works with cultured neurons on flat dishes. Neurons taken from animal brains at an early stage of embryonic development can be cultivated in a suitable nutrient medium and will, without prompting, grow synaptic connections to form a network. If grown on a flat surface, the network is then roughly two-dimensional and its structure can be determined with reasonable reliability by simple optical microscopy. The advantage of this approach is that it is quick and inexpensive, but it has the disadvantage that the networks studied are substantially different from the brains of real living animals.

Figure 5.7: Brain circuitry of a worm. A portion of the neural circuitry of the worm C. elegans, reconstructed by hand from electron micrographs of slices through the worm's brain [470]. Reproduced from J. G. White, E. Southgate, J. N. Thomson, and S. Brenner, The structure of the nervous system of the nematode Caenorhabditis elegans, Phil. Trans. R. Soc. B 314(1165), 1–340 (1986), by permission of the Royal Society.

In this respect, studies of real brains are more satisfactory and likely to lead to greater insight, but they are also far harder, because real brains are three-dimensional and techniques for imaging three-dimensional structure are less well developed than for two. The oldest and best established approach is to cut suitably preserved brains or brain regions into thin slices, whose structure can then be determined by ordinary two-dimensional electron microscopy.

Given the structure of a set of consecutive slices, one can, at least in principle, reconstruct the three-dimensional structure, identifying different types of neurons by their appearance, where possible. In the early days of such studies, reconstruction was done by hand, but more recently researchers have developed computer programs that can significantly speed up the process [231]. Nonetheless, studies of this kind are laborious and can take months or years to complete, depending on the size and complexity of the network studied.

Figure 5.7 shows an example of a "wiring diagram" of a neural network, reconstructed by hand from electron microscope studies of this type [470]. The network in question is the neural network of the worm Caenorhabditis elegans, one of the best studied organisms in biology. The brain of C. elegans is simple—it has about 300 neurons and essentially every specimen of the worm has the same wiring pattern. Several types of neurons, denoted by shapes and labels, are shown in the figure, along with a number of different types of connections, both excitatory and inhibiting. Some of the edges run off the page, connecting to other parts of the network not shown. The experimenters determined the structure of the entire network and presented it as a set of interconnected wiring diagrams like this one.

Figure 5.8: A historical neural network image. An early image of a collection of neurons, hand-drawn from optical microscope observations by Ramón y Cajal. Reproduced courtesy of the Cajal Institute: Cajal Legacy, Spanish National Research Council (CSIC), Madrid, Spain.

The reconstruction of neural networks from slices in this way is the current gold standard in the field, but its laborious nature has led researchers to ask whether more direct methods of measurement might be possible. In the past few years a number of new methods have emerged that hold significant promise for faster and more accurate network structure determination. Most of these methods are based on optical (rather than electron) microscopy, which is something of a throwback to earlier days. Santiago Ramón y Cajal, the Nobel-prize-winning pathologist regarded by many as the father of neuroscience, pioneered the modern study of neuroanatomy with his beautiful hand-drawn illustrations of brain cells, created by staining slices of brain tissue with colored dyes and then examining them through an optical microscope (see Fig. 5.8). Current optical techniques do essentially the same thing, albeit with more technological sophistication.

Staining of brain tissue is crucial to making brain cells visible at optical wavelengths—without it there is not enough contrast between the neurons and surrounding tissue to make a clear picture. Early studies such as those of Ramón y Cajal used simple injected dyes, but modern studies use a range of more exotic techniques, particularly genetically engineered strains of laboratory animals, most often mice, that generate their own stains. This is usually done by introducing genes into the mice that produce fluorescent substances within brain cells, such as the so-called green fluorescent protein or GFP, a widely used marker that was originally discovered, naturally occurring, in a certain species of jellyfish. Fluorescent proteins emit visible light when illuminated in the ultraviolet, light that can be photographed to create pictures of the neurons.

A crucial problem with optical imaging of brain tissue, however, is the sheer density of neurons; they are packed so tightly together—tens of thousands or more per cubic millimeter—that it is often difficult to tell them apart from one another. To get around this problem, researchers make use of a selection of different fluorescent proteins, including the original jellyfish GFP as well as various variants and alternatives, each emitting a different color of light.

A particularly elegant implementation of this idea is the technique known as Brainbow [302], in which each neuron generates a random combination of different fluorescent proteins and each combination corresponds to a unique, identifiable color of emitted light. With a suitable palette of proteins the number of distinguishable colors can be as high as a hundred. The experimenter then makes separate images of the neurons of each color, which ideally are sparse enough to allow clear visualization of their shapes and positions, then combines the images to create a picture of the overall network.
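
A back-of-the-envelope sketch shows where such a large palette can come from. The specific numbers below are assumptions chosen for illustration, not figures from the Brainbow study: if each of a few fluorescent proteins can be expressed at several optically distinguishable levels, the number of nominal color combinations grows as a power of the number of proteins.

```python
# Rough combinatorial sketch of Brainbow-style color coding.
# Assumed numbers, for illustration only.

n_proteins = 3          # a few distinct fluorescent proteins
levels_per_protein = 5  # assumed distinguishable expression levels per protein

combinations = levels_per_protein ** n_proteins
print(combinations)     # 125 nominal combinations; in practice on the order of
                        # a hundred may be reliably told apart
```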

While elegant, this approach does not solve the fundamental problem of having to slice up the brain to photograph it. Brainbow and techniques like it are still, at least for now, most often applied to slices. However, our new-found ability to clearly distinguish brain cells using optical techniques does open the door to the possibility of true 3D imaging if one can find a way to perform optical microscopy on whole brains or brain regions (something that is fundamentally impossible with electron microscopy). The fundamental tool for doing this is the confocal microscope, a type of microscope that uses special optics, combined with computer post-processing, to image the light coming from just a single two-dimensional slice of a three-dimensional space. By scanning the imaged slice through a sample one can then build up a picture of the entire three-dimensional structure. This doesn't completely solve our problem, however, because in order to focus light from a region in the interior of a brain the light still needs to get out of the brain in the first place, which normally it cannot do because the rest of the brain is in the way. One promising approach for resolving this issue is the technique called Clarity [105], which is a method for rendering brain tissue transparent by infusing it with a hydrogel. Once the tissue becomes transparent one can photograph its entire three-dimensional structure with a confocal microscope without needing to slice it up.
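
In software terms the scanning step amounts to stacking the 2D slice images into a 3D volume. The sketch below (using NumPy, with a placeholder function standing in for whatever acquisition interface a real microscope provides) shows only this bookkeeping, nothing about the optics themselves.

```python
# Sketch of assembling a 3D volume from a stack of 2D confocal slices.
# `acquire_slice` is a placeholder; a real microscope driver would supply the images.
import numpy as np

def acquire_slice(z, shape=(256, 256)):
    """Placeholder for grabbing the 2D image focused at depth index z."""
    return np.zeros(shape)

n_slices = 100
volume = np.stack([acquire_slice(z) for z in range(n_slices)], axis=0)
print(volume.shape)   # (100, 256, 256): depth x height x width
```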

Methods such as these can allow one to visualize the positions and shapes of neurons in brains or brain regions, but they do not directly give us the topology of the corresponding neural network. For that, one must carefully analyze the pictures taken, following the path of each axon or dendrite to determine which neurons connect with which. And while this is certainly possible, it is a laborious and sometimes error-prone task with current techniques. A quite different approach, which directly measures connections between neurons, is transsynaptic tracing, which involves injecting a tracer molecule—most commonly wheat germ agglutinin or WGA—into the brain, where it is absorbed by a subset of the neurons then transported along the axons of those neurons, across the synapses, and into the neighboring cells. In one ingenious version of the method the WGA is tagged with green fluorescent protein so that its final distribution can be photographed directly, from which one can then work out to which neighbors the outputs of a neuron connect. A variant on the same idea, called retrograde tracing, makes use of tracers that are naturally transported backwards across the synapse, allowing one to determine inputs. In more recent versions of these approaches researchers have replaced tracers like WGA with viruses that infect the neurons and spread from one to another, again allowing one to determine which cells are connected to which.

Optical imaging and transsynaptic tracing techniques are promising but still in their infancy. There is not yet (at the time of writing) any example of a large-scale network reconstruction, similar to that of Fig. 5.7, using these techniques. Still, this is a time of rapid advances in brain imaging and there is every hope that, probably within just a few years, these methods will have progressed to the point where they can give us significant insight into the structure of neural networks.

5.2.2 Networks of functional connectivity in the brain

A different class of brain networks consists of networks of macroscopic functional connectivity between large-scale regions of the brain. In these networks the nodes represent entire brain regions, usually regions that are already known to perform some function such as vision, motor control, or learning and memory, and the edges represent some kind of functional connection, often only loosely defined, whereby one region controls or feeds information to another. The structure of these macroscopic networks can shed light on the logical organization of the brain—how information processing occurs or how different processes are interlinked—while avoiding the microscopic details of connection between individual brain cells. In principle macroscopic brain networks, while still complex, are much simpler than neuronal networks, the former containing typically tens or hundreds of nodes, where the latter could potentially contain billions. Macroscopic networks also have the advantage that they can be observed in living brains, including in humans, which cannot currently be done for their microscopic counterparts.
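
To make the representation concrete, a macroscopic network of this kind can be stored as a small directed graph whose nodes are named brain regions. The regions and links in the sketch below are entirely hypothetical, chosen only to illustrate the data structure, not taken from any real study.

```python
# Toy macroscopic brain network: nodes are brain regions and a directed edge
# (A, B) indicates that region A feeds information to, or influences, region B.
# The regions and connections are hypothetical, for illustration only.
from collections import Counter

functional_edges = [
    ("primary visual cortex", "visual association areas"),
    ("visual association areas", "hippocampus"),
    ("motor cortex", "cerebellum"),
]

# Out-degree: how many other regions each region feeds into.
out_degree = Counter(src for src, dst in functional_edges)
print(out_degree)
```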

The primary technique for observing macroscopic network structure in the
