
On the Analysis and Design of the Locust Olfactory System


Academic year: 2023


Full Text

Glomeruli are sampled by the ~830 projection neurons (PNs) and ~300 inhibitory local interneurons (LNs) of the antennal lobe. I found that ~50% of the variance in individual KCs' responses could be explained by using at most 1/6th of the PN population.
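An analysis of this kind can be sketched with synthetic data (the regression setup, cell counts per trial, and variable names below are illustrative assumptions, not the thesis's actual pipeline): regress one KC's response on a subset of at most 1/6th of the PNs and report the variance explained.

```python
# Sketch only: synthetic PN/KC data, not the recorded responses.
import numpy as np

rng = np.random.default_rng(0)
n_pns, n_trials = 830, 300
subset_size = n_pns // 6          # "at most 1/6th of the PN population"

# Synthetic PN population activity (trials x PNs); a KC driven by a few PNs
pn = rng.poisson(5.0, size=(n_trials, n_pns)).astype(float)
true_weights = np.zeros(n_pns)
drivers = rng.choice(n_pns, size=20, replace=False)
true_weights[drivers] = rng.normal(0, 1, size=20)
kc = pn @ true_weights + rng.normal(0, 2.0, size=n_trials)

# Regress the KC response on a PN subset that happens to include its drivers
extras = rng.choice(n_pns, size=subset_size - 20, replace=False)
subset = np.union1d(drivers, extras)[:subset_size]
X = pn[:, subset] - pn[:, subset].mean(0)
coef, *_ = np.linalg.lstsq(X, kc - kc.mean(), rcond=None)
resid = kc - kc.mean() - X @ coef
r2 = 1 - np.var(resid) / np.var(kc)   # fraction of variance explained
print(f"variance explained by {subset_size} of {n_pns} PNs: {r2:.2f}")
```

With real recordings the explainable fraction was about 50%; the synthetic example only illustrates the computation, not that number.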

Encoding of Odor Mixtures: Identification, Categorization, and Generalization

Highlights

The responses of many KCs signaled the presence of single odor components in mixtures, even when these single odors were part of an 8-component mixture.

Summary

Again, we found that the mixed responses of individual PNs could in many cases be explained by their responses to one of the components, and that the majority of the population could be decomposed based on a preference for one component. Although the responses of individual KCs were brief, the KC population contained responsive cells at every instant of the PN presynaptic population response, consistent with decoding of PN output by KCs.

Introduction

Humans are usually unable to identify more than ~3 components in a mixture, and at most 8–12 known components (Jinks and Laing, 1999), while insects and rodents can do better (Hurst and Beynon, 2004; Reinhard et al., 2010). For the binary mixture experiments, six different concentrations of pure citral and 1-octanol were used individually and in different mixtures.

Results

  • Definitions
  • Representations of Binary Mixtures by Single PNs
  • Representations of Binary Mixtures by PN Populations
  • Representations of Complex Mixtures by Single PNs
  • Representations of Complex Mixtures by PN Populations
  • Kenyon Cell Responses to Mixtures
  • Decoding PN Trajectories Over Time
  • Odor Identification, Categorization, and Generalization from Population Responses

In about half of the single-component fits, the response to the "preferred" component dominated the others. For each time bin along each trajectory, we calculated the fraction of response variance explained.

Discussion

  • Mixture Representations by Single PNs are Partially Explained by the Responses to Their Components
  • Odor Representations by PN Populations are Ordered
  • Subspace Readout of PNs by KCs
  • Individual KCs are Better Odor Segmenters than Individual PNs
  • Population Decoding from PNs and KCs
  • Experimental Sampling Bias
  • Functional Consequences of Odor Segmentation

One of the odor conditions, cit100:oct100, was present in both the binary mixture and complex mixture experiments. Including this non-responder population, and assuming that the sensitivity of the remaining PNs is equally divided among the 8 tested components, allows us to predict the number of mixture responses that can be fitted. Our results (Figures 1-7, S1-7) show that both PN and KC populations can be read by linear classifiers in single time bins to perform odor identification, segmentation, and generalization, and that the time course of performance is similar in both populations.
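The single-time-bin decoding idea can be sketched as follows (synthetic population responses and a nearest-centroid readout stand in for the recorded data and the thesis's actual classifier): train an independent linear readout on the population vector from each time bin and track accuracy over time.

```python
# Sketch only: synthetic data; bin counts and noise levels are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_cells, n_bins, n_trials, n_odors = 100, 20, 30, 4

# Odor-specific mean responses that appear only after "odor onset" (bin 5)
means = rng.normal(0, 1, size=(n_odors, n_bins, n_cells))
means[:, :5, :] = 0
X = means[:, None] + rng.normal(0, 1.0, size=(n_odors, n_trials, n_bins, n_cells))
labels = np.repeat(np.arange(n_odors), n_trials)

acc = []
for b in range(n_bins):
    data = X[:, :, b, :].reshape(-1, n_cells)
    correct = 0
    # leave-one-trial-out nearest-centroid classification (a linear readout)
    for i in range(len(data)):
        mask = np.ones(len(data), bool); mask[i] = False
        cents = np.array([data[mask & (labels == k)].mean(0)
                          for k in range(n_odors)])
        correct += np.argmin(((cents - data[i]) ** 2).sum(1)) == labels[i]
    acc.append(correct / len(data))

print(f"pre-stimulus accuracy ~{np.mean(acc[:5]):.2f}, "
      f"response-window accuracy ~{np.mean(acc[5:]):.2f}")
```

Accuracy sits at chance before the synthetic "onset" and rises sharply afterwards, mirroring the time-resolved decoding described in the text.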

However, in a typical recording session, it was common to record KCs that did not respond to any of the single components.

Methods

  • Preparation and Stimuli
  • Binary Mixture Experiments
  • Complex Mixture Experiments
  • Electrophysiology
  • Recording Constraints and Sampling Biases
  • Extracellular Data Analysis
  • Computational Analysis

For some MB recordings (KCs, LFP), probes were pressed onto the surface of the MB. We could then calculate the probability of the parameters (b, v), given the data y and the model M, as P(b, v | y, M) ∝ P(y | b, v, M) P(b, v | M). Priors on the constant levels and on the intercept of the linear model were uniform over the [0, 2] range.
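A minimal sketch of this model comparison (the Gaussian noise model, its scale, and the parameter grids are my assumptions, not the thesis's exact choices): integrate the likelihood of a constant model and of a linear model over uniform priors on a grid, and compare the resulting marginal likelihoods.

```python
# Sketch only: illustrative data and noise model.
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 20)
y = 0.5 + 1.2 * x + rng.normal(0, 0.1, size=x.size)  # data with a linear trend

def log_lik(pred, y, sigma=0.1):
    """Gaussian log-likelihood of the data given a predicted response."""
    return (-0.5 * np.sum((y - pred) ** 2) / sigma**2
            - y.size * np.log(sigma * np.sqrt(2 * np.pi)))

grid = np.linspace(0, 2, 201)  # uniform prior over [0, 2] for each parameter

# Marginal likelihood of the constant model: average likelihood over level b
ml_const = np.mean([np.exp(log_lik(np.full_like(x, b), y)) for b in grid])

# Marginal likelihood of the linear model: average over intercept b, slope v
ml_lin = np.mean([[np.exp(log_lik(b + v * x, y)) for v in grid] for b in grid])

print("linear model preferred:", ml_lin > ml_const)
```

Because the marginal likelihood averages (rather than maximizes) over parameters, the extra parameter of the linear model is automatically penalized unless the data support it.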

For a global measure (Figure 1-4D), we correlated the odor distances with the correlation distances between the trajectories: in the response window, in the baseline window, and in the response window after shuffling the odor labels of the responses (separately for each trial).

Supplementary Text

  • KC Search
  • Bayesian Model Selection
  • Odor Metrics
  • Balanced Odor Classes for Decoding Categorization and Generalization

When comparing two odor vectors, the first distance is 1 minus the ratio of the number of odor components the two odors share to the total number of components present, pooled across the two odors. This metric satisfies both of our criteria above, since (a) if two odors share no components, the numerator of the ratio is zero and the assigned distance is 1 (maximum), and (b) if the numerator is held constant while the number of components in both odors increases, the denominator increases, reducing the ratio and increasing the distance. The second distance is 1 minus the ratio of the number of components shared by the two odors to the maximum number of components present in either odor.

It fulfills our first criterion as before, and partly our second criterion, because if the size of the more complex mixture increases, the distance between the two odors will increase.
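The two set-based distances can be transcribed directly, with odors represented as sets of component labels (the function names are mine):

```python
def jaccard_distance(a, b):
    """1 - |shared components| / |components present in either odor|."""
    a, b = set(a), set(b)
    return 1.0 - len(a & b) / len(a | b)

def overlap_distance(a, b):
    """1 - |shared components| / |components of the larger odor|."""
    a, b = set(a), set(b)
    return 1.0 - len(a & b) / max(len(a), len(b))

# Criterion (a): odors sharing no components are maximally distant
assert jaccard_distance({"cit"}, {"oct"}) == 1.0
# Criterion (b): growing both odors with the overlap fixed increases distance
assert jaccard_distance({1, 2}, {1, 3}) < jaccard_distance({1, 2, 4}, {1, 3, 5})
```

The second metric changes only when the larger odor grows, which is why it satisfies criterion (b) only partly, as noted above.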

Supplementary Figures

The unperturbed traces are very similar in the two panels because the Spearman rank correlation is sensitive to the rank of the data being correlated, not to the actual numerical values. For example, almost half of the KCs do not respond to any of the odors, while only ~3% of the PNs do not respond to any odor. Average firing rate: average PN and KC firing rate as a function of the number n of odor components in the mixture.

The time of peak accuracy varies across odor component categories and occurs within 100 ms of odor onset for “W” but no later than ~200 ms after odor offset for “A”.

Full-Rank, Ultra-Sparse Odor Representations

Introduction

Our work provides a number of predictions both about the architecture and learning rules operating in the system, and about the behavioral effects of anatomical and physiological perturbations in the system.

The Biological Circuit

Problem Formulation

The rank of Z is therefore at most the smaller of the number of odors and the number of input channels. Thus, if the number of channels ≥ the number of odors, a solution to the equation can in principle be found for any valence vector v (unless there is degeneracy in the matrix Z). On the other hand, if the rank is less than the number of odors, then in general no solution exists, because some valence assignments lie outside the range of the mapping.

Therefore, the above system always has a solution if the number of input channels ≥ the number of odors and the encoding matrix is full rank.
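This solvability claim is easy to verify numerically (the matrix sizes, code density, and notation below are illustrative assumptions): with more channels than odors and a full-row-rank code Z (odors × channels), a readout w satisfying Z w = v exists for any valence vector v.

```python
# Sketch only: random sparse binary code in place of the real encoding matrix.
import numpy as np

rng = np.random.default_rng(3)
n_odors, n_channels = 50, 200            # channels >= odors

Z = (rng.random((n_odors, n_channels)) < 0.1).astype(float)  # sparse binary code
v = rng.choice([-1.0, 1.0], size=n_odors)                    # arbitrary valences

assert np.linalg.matrix_rank(Z) == n_odors  # full row rank (w.h.p. for random Z)
w, *_ = np.linalg.lstsq(Z, v, rcond=None)   # minimum-norm readout weights
print("max |Z w - v| =", np.abs(Z @ w - v).max())  # essentially zero
```

When the rank drops below the number of odors, the same least-squares call still runs but leaves a nonzero residual for generic v, which is exactly the "no solution exists" case in the text.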

Full-Rank Ultra-Sparse Representations

In order to increase the sparsity of the coding matrix, we then increased the nonlinearity threshold of the perceptron. As shown in the blue curve in Figure 2-2D, increasing the threshold increases KC sparseness, as desired. To understand the cause of the rank drop, we examined the coding matrix at the sparsity level at which the rank was greatly reduced.

Adding the GGN allows full rank to be maintained at roughly an order of magnitude higher sparseness.
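The effect can be sketched with a toy model (all sizes and the winner-take-most normalization standing in for the GGN's feedback inhibition are my assumptions): a hard global threshold drives many odors to all-zero KC rows and collapses the rank, while keeping a fixed top-k of KCs active per odor preserves full rank at comparable sparseness.

```python
# Sketch only: random PN-to-KC projection; "GGN-like" here means a top-k rule.
import numpy as np

rng = np.random.default_rng(4)
n_odors, n_pns, n_kcs = 100, 50, 1000
odors = (rng.random((n_odors, n_pns)) < 0.5).astype(float)
W = rng.normal(0, 1, size=(n_pns, n_kcs))   # random PN-to-KC weights
drive = odors @ W

# Fixed high threshold: very sparse, but many odors lose all active KCs
hard = (drive > np.percentile(drive, 99.9)).astype(float)

# GGN-like normalization: each odor activates exactly its top-k KCs
k = 5
ggn = np.zeros_like(drive)
top = np.argsort(drive, axis=1)[:, -k:]
np.put_along_axis(ggn, top, 1.0, axis=1)

print("hard-threshold rank:", np.linalg.matrix_rank(hard),
      " GGN-like rank:", np.linalg.matrix_rank(ggn))
```

The top-k rule guarantees every odor a nonzero, near-unique KC set, which is why the rank survives at sparseness levels where the fixed threshold fails.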

Robustness to Input Noise via Hebbian Learning

The smear radius measures the fragility of the encoding by corrupting the input with noise and measuring the average distance of the resulting representation from that of the uncorrupted encoding. A 200-dimensional input was corrupted by input noise and projected into a 2000-dimensional space and the distance to the uncorrupted encoding was calculated as a function of the maximum amount of added noise (in flipped bits). Flipping up to 10% of the input results in a change in the output representation equal to the density of the encoding.

Plotted are single trials (gray) and the average (red) of the normalized smear radius sampled at different points during learning, showing a rapid decrease to near zero. (E) Hebbian learning reduces the smear radius to near zero while maintaining full-rank representations and near-maximal source entropy (KCs are active almost equally often).
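The smear-radius measurement itself can be sketched directly from its definition (the encoder below is a fixed random threshold projection with illustrative parameters, i.e. the pre-learning case, not the Hebbian-trained network):

```python
# Sketch only: measure how far bit-flip input noise moves the output code.
import numpy as np

rng = np.random.default_rng(5)
n_in, n_out = 200, 2000
W = rng.normal(0, 1, size=(n_in, n_out))

def encode(x, frac_active=0.05):
    """Threshold projection keeping the top frac_active of output units."""
    d = x @ W
    return (d > np.quantile(d, 1 - frac_active)).astype(float)

x = (rng.random(n_in) < 0.5).astype(float)   # 200-dimensional binary input
clean = encode(x)

for n_flips in (0, 5, 20):
    xc = x.copy()
    idx = rng.choice(n_in, size=n_flips, replace=False)
    xc[idx] = 1 - xc[idx]                     # corrupt the input by bit flips
    dist = np.abs(encode(xc) - clean).sum()   # Hamming distance in KC space
    print(f"{n_flips:3d} flipped bits -> code distance {dist:.0f}")
```

Averaging this distance over noise draws, and normalizing, gives the smear radius; Hebbian learning of W is what drives it toward zero in the text.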

Robustness to Pruning

After learning, prominent peaks are introduced into the distribution, and almost all weights are nonzero. The last three properties are preserved even if 50% of the weights are randomly pruned (blue-green bars), but the smear radius increases dramatically, to almost its pre-learning value.

Online Learning with Near Bayes-Optimal Readout

The sum of the synaptic weights onto each bLN is held constant, modeling a receptor pool of fixed (though possibly different) size for each neuron. We show that the learning rule above results in each KC-to-bLN synapse reflecting the probability that the corresponding KC is active during the presentation of excitatory or inhibitory odors (depending on the bLN). The net excitation of each bLN by an odor is then proportional to an approximation of the probability of the corresponding valence.

Thus, if one of the bLNs calculates P(+|odor), and the other P(-|odor), and the animal acts on the decision of the bLN with stronger excitation, it will maximize its expected reward.
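A toy simulation illustrates the scheme (the specific update w += eta*(x - w) on rewarded or punished trials is my stand-in for the thesis's rule; odor counts and code sparseness are illustrative): the weights converge to the probability that each KC is active given the valence, and acting on the more excited bLN classifies the odors.

```python
# Sketch only: sparse random KC codes with assigned valences.
import numpy as np

rng = np.random.default_rng(6)
n_kcs, n_odors, k = 500, 40, 10
codes = np.zeros((n_odors, n_kcs))
for i in range(n_odors):
    codes[i, rng.choice(n_kcs, size=k, replace=False)] = 1.0
valence = np.array([+1] * 20 + [-1] * 20)   # half excitatory, half inhibitory

w_pos = np.zeros(n_kcs)                      # KC-to-+bLN weights
w_neg = np.zeros(n_kcs)                      # KC-to--bLN weights
eta = 0.01
for _ in range(20000):
    o = rng.integers(n_odors)
    x = codes[o]
    if valence[o] > 0:                       # update only the rewarded branch
        w_pos += eta * (x - w_pos)           # w_pos -> P(KC active | + odor)
    else:
        w_neg += eta * (x - w_neg)           # w_neg -> P(KC active | - odor)

# Decision: act on the bLN with the stronger excitation
drive_pos, drive_neg = codes @ w_pos, codes @ w_neg
decisions = np.where(drive_pos > drive_neg, 1, -1)
print("classification accuracy:", (decisions == valence).mean())
```

Because each weight tracks a running average of KC activity on trials of one valence, the drives approximate the two posteriors, and the argmax decision approximates the Bayes-optimal choice described above.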

Asymptotic Mean and Variance of Readout Weights

We can now examine the time course of the development of the weight wi(n+1). Taking the limit as n goes to infinity, and observing that a < 0, gives the average steady-state value of the weight. The steady-state value of the synaptic weight from the ith KC to the +bLN is therefore proportional to the probability of a positive valence, given that the KC is active.

For the second equality, we assumed that the initial value of the weight and the KC activity at that synapse are independent, allowing the variances to sum.
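The steady-state argument can be written compactly; a generic sketch, in which the coefficients a and b stand for constants set by the learning rule and odor statistics (not reproduced here):

```latex
% Assumed form of the averaged update: linear in the weight, with a < 0
\[
  \langle w_i(n+1) \rangle = (1 + a)\,\langle w_i(n) \rangle + b,
  \qquad a < 0 ,
\]
% so the mean converges geometrically; setting
% \langle w_i(n+1) \rangle = \langle w_i(n) \rangle at the fixed point gives
\[
  \langle w_i(\infty) \rangle = -\frac{b}{a},
\]
% which, for the rule considered in the text, is proportional to
% P(+ \mid \mathrm{KC}_i \text{ active}).
```

The variance calculation proceeds from the same recursion, using the independence of the initial weight and the KC activity noted above.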

Expected Drive of Excitatory bLN

If this overlap is made small, either by increasing the sparsity of the code (decreasing k) or by increasing the number of KCs (D), then the input to the bLN is approximately proportional to the probability of positive valence given the odor. We have now shown that the input to the +bLN in response to an excitatory odor u can be made approximately proportional to the probability of positive valence given that odor. The result is that the excitation of the +bLN is proportional to the posterior on positive valence given the odor.

Similarly, the excitation of the -bLN will be proportional to the posterior on negative valence, given the odor.

Numerical Verification

Again, the values of the weights are roughly centered around their theoretically predicted value, shown in red. Also shown is the average value, over 96 trials, of the drive to the excitatory bLN in response to the presentation of each of 200 equally likely excitatory odors, sampled near the end of learning. The procedure was repeated 5 times, each time with a different set of odors. Also plotted (thin curves) is the error rate of the system as a function of time, for three different values of the prior on positive odors.

The classification performance of the system was measured on the entire untreated odor set periodically during learning, and the mean ± 3 S.E.M.s calculated over 5 trials are plotted above for three different fractions of excitatory odors.

Discussion

Long-term disabling of the GGN will lead to forgetting/corruption of odor representations, as Hebbian plasticity at the input overrides established PN-to-KC weights. One of the simplifying assumptions we made was to approximate antennal lobe function as a mapping of odors into single, dense, binary vectors in which each element is generated independently. This approach ignores the temporal dynamics of the antennal lobe, which produces decorrelated (although not necessarily independent) odor representations over the course of several oscillatory cycles (Mazor and Laurent, 2005).
