
6.3 Handwritten Digit Recognition with Spiking DBNs

6.3.1 Results

Robustness of spiking DBNs to reduced bit precision of fixed-point synapses and input noise: This section summarises the findings of the investigation into the effect of reduced weight precision on a trained spike-based DBN and its robustness to input sensory noise.

Figure 6.8 demonstrates the effect of reduced bit precision on the trained weights of the spiking DBN of O'Connor et al. [183]. More specifically, Figure 6.8(a) shows the receptive fields of the first 6 neurons of the first hidden layer for different fixed-point weight resolutions. As can be observed visually, much of the structural information of the receptive fields is preserved, even for a bit precision of f = 4 bits.
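To make the Qm.f notation concrete, the following Python sketch rounds a weight matrix to signed Q3.f fixed point and reports how many synapses collapse to zero, mirroring the quantity plotted in Figure 6.8(b). The matrix shape, weight distribution and rounding convention are illustrative assumptions, not the exact procedure used in these experiments.

import numpy as np

def quantise_q(w, m=3, f=3):
    """Round weights to signed Qm.f fixed point: m integer bits, f fractional bits."""
    scale = 2.0 ** f
    lo, hi = -(2.0 ** m), (2.0 ** m) - 1.0 / scale   # representable range
    return np.clip(np.round(w * scale) / scale, lo, hi)

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=(784, 500))            # hypothetical weight matrix

for f in (8, 4, 3, 2, 1):
    q = quantise_q(w, f=f)
    pruned = 100.0 * np.mean(q == 0.0)               # synapses rounded to zero
    print(f"Q3.{f}: {pruned:5.1f}% of synapses pruned")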

[Figure 6.8 appears here: (a) receptive fields at weight precisions Q3.8, Q3.4, Q3.3, Q3.2 and Q3.1; (b) synapses pruned (%), from 0 to 100, against weight precision.]

Figure 6.8. Impact of weight bit precision on the representations within a DBN. (a) The receptive fields of the first 6 neurons (rows) in the first hidden layer of the DBN with two hidden layers. (b) Percentage of synapses from all layers that are set to zero due to the reduction in bit precision of the fractional part. Figure by Stromatias [241] with minor modifications.

Figure 6.8(b) presents the percentage of weights that were truncated to zero due to the fixed-point rounding.

Figure 6.9(a) illustrates the classification accuracy (CA) of the spiking DBN on the MNIST test set as a function of input noise and weight bit resolution for two different input firing rates (100 and 1,500 Hz), for an input stimulus of 1 second.

Both curves show that the performance drops as the percentage of input noise increases, but for higher firing rates (1,500 Hz) the performance remains constant until the input noise reaches a 50% level. The peak performance stays at almost identical levels to double floating-point precision even for bit precisions of f = 3. Figure 6.9(b) shows the area under the curve; a larger area translates to a higher classification performance. As in (a), a similar trend can be observed: higher input firing rates result in an increase in CA. Figure 6.9(c) demonstrates the CA for different weight bit precisions as the input firing rates increase from 100 Hz to 1,500 Hz, for two different input noise levels, 0% and 60%. Finally, the plots in Figure 6.9(d) show that there is a wide range of input noise levels and weight bit resolutions in which the performance remains remarkably high for the two input rates, 100 Hz and 1,500 Hz. For all experiments, the performance dropped significantly when a weight bit precision of f = 1 was used. For a weight bit precision of f = 2, the CA remained at approximately 80% for 100 Hz and above 90% for firing rates higher than 600 Hz.
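The notion of x% input noise can be pictured with the sketch below: one plausible protocol redirects that fraction of a digit's input spikes to randomly chosen input neurons. The exact noise model used in the experiments may differ, and all names and values here are illustrative.

import numpy as np

def add_input_noise(sources, noise_percent, n_inputs, rng):
    """Redirect noise_percent% of the input spikes to random input neurons."""
    noisy = sources.copy()
    n_noise = int(round(len(sources) * noise_percent / 100.0))
    hit = rng.choice(len(sources), size=n_noise, replace=False)
    noisy[hit] = rng.integers(0, n_inputs, size=n_noise)
    return noisy

rng = np.random.default_rng(1)
n_inputs, rate_hz, duration_s = 784, 1500.0, 1.0
intensity = np.zeros(n_inputs)
intensity[300:420] = 1.0                              # stand-in "digit" pixels
n_spikes = rng.poisson(rate_hz * duration_s)          # total input spikes in 1 s
clean = rng.choice(n_inputs, size=n_spikes, p=intensity / intensity.sum())
noisy = add_input_noise(clean, noise_percent=50, n_inputs=n_inputs, rng=rng)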

These findings illustrate that spike-based DBNs do indeed exhibit the desired robustness to input noise and numerical precision. For a weight precision of Q3.3 (6 bits per weight), the classification performance is on a par with double floating-point precision (64 bits per weight).


Figure 6.9. Effect of reduced weight bit precision and input noise on the classification accuracy (CA) of the spiking DBN with two hidden layers. (a) CA as a function of input noise and bit precision of synaptic weights for two specific input spike rates of 100 and 1,500 Hz. Results over four trials. (b) Normalised area under the curves in (a) for different percentages of input noise, input firing rates and weight bit precisions. Higher values mean higher accuracy and better robustness to noise. (c) CA as a function of the weight bit resolution for different input firing rates and for two different noise levels, 0% and 60%. (d) CA as a 2D function of the bit resolution of the weights and the percentage of input noise for 100 Hz and 1,500 Hz input rates. The results confirm that spiking DBNs with low-precision weights down to f = 3 bits can still reach high performance levels and tolerate high levels of input noise. Figure by Stromatias et al. [243] with minor modifications.

For this particular spiking DBN, which consists of 642,510 synapses, this means that with a weight precision of Q3.3 only 0.46 MBytes are required to store all the weights, instead of 4.9 MBytes.
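These storage figures follow directly from the synapse count; a quick check in Python, assuming the quoted MBytes are binary megabytes (2^20 bytes):

n_synapses = 642_510
bytes_q33 = n_synapses * 6 / 8       # Q3.3: 6 bits per weight
bytes_dbl = n_synapses * 64 / 8      # double precision: 64 bits per weight
print(bytes_q33 / 2**20)             # ~0.46 MBytes
print(bytes_dbl / 2**20)             # ~4.90 MBytes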

Moreover, one of the effects of the reduced precision is that many of the weights become zero due to rounding, as seen in Figure 6.8(b), and thus they can be pruned. The benefits of pruning the zeroed weights may include faster execution times, by avoiding unnecessary memory look-ups, as well as the ability to execute deeper neural networks on the same hardware.
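A minimal sketch of such pruning, again with made-up weights: zeroed synapses are dropped and the survivors kept as a sparse (pre, post, weight) connection list, the same shape of list that connectors such as PyNN's FromListConnector accept.

import numpy as np

def prune_zero_synapses(q_weights):
    """Keep only non-zero synapses as a (pre, post, weight) connection list."""
    pre, post = np.nonzero(q_weights)
    return list(zip(pre.tolist(), post.tolist(), q_weights[pre, post].tolist()))

rng = np.random.default_rng(2)
w = np.round(rng.normal(0.0, 0.05, size=(784, 500)) * 8) / 8   # Q3.3-style rounding
connections = prune_zero_synapses(w)
print(f"{len(connections)} of {w.size} synapses survive pruning")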

Table 6.3. Classification accuracy (CA) of the same DBN with two hidden layers running on different platforms [241].

Simulator               CA (%)   Weight precision   Description
MATLAB                  96.06    Double             Rate-based (Siegert)
Brian                   95.00    Double             Clock-driven
O'Connor et al. [183]   94.09    Double             ?
SpiNNaker               94.94    Q3.8               Hybrid
Minitaur [176]          92.00    Q5.11              Event-driven

Table 6.3 summarises a comparison between the SpiNNaker platform and various hardware and software simulators, including the Brian SNN simulator, for the MNIST classification problem. The SpiNNaker results are very close to the results of the software simulation, with only a 0.06% difference, despite the fact that SpiNNaker uses less precise weights than standard software implementations.²

Classification Latency and Power Requirements of Spiking DBNs: To investigate the real-time performance of the O'Connor et al. [183] spiking DBN on SpiNNaker, two experiments were conducted. The first experiment investigated the mean classification latency and accuracy of the spiking DBN as a function of the number of input spikes, while the second experiment aimed at measuring the mean classification latency of the spike-based DBN running on SpiNNaker. An additional experiment was performed to investigate the power requirements of the spiking DBN running on a single SpiNNaker chip as a function of the input firing rate.

For the experiments described in this section, the following SpiNNaker configuration was used: the ARM9 processor clocks were set to 200 MHz, the routers and system buses were set to 100 MHz, while the off-chip memory clocks were set to 133 MHz.

For the first experiment, the static images of the MNIST test digits were converted to spike trains, with input firing rates ranging from 500 Hz up to the point where additional input spikes per second had no effect on the mean classification accuracy. Each experiment was executed for four trials and the results were averaged across all trials.
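One common way to perform such a conversion, sketched below, scales per-pixel Poisson rates so that they sum to a target total input rate. Treating the quoted rates as population totals is an assumption here, and the actual encoding used may differ.

import numpy as np

def image_to_rates(image, total_rate_hz):
    """Per-pixel Poisson rates (Hz), proportional to pixel intensity."""
    flat = image.astype(float).ravel()
    return total_rate_hz * flat / flat.sum()

def poisson_train(rate_hz, duration_s, rng):
    """Spike times of one homogeneous Poisson source over duration_s seconds."""
    n = rng.poisson(rate_hz * duration_s)
    return np.sort(rng.uniform(0.0, duration_s, size=n))

rng = np.random.default_rng(3)
digit = rng.integers(0, 256, size=(28, 28))        # stand-in for an MNIST image
rates = image_to_rates(digit, total_rate_hz=1500.0)
trains = [poisson_train(r, duration_s=1.0, rng=rng) for r in rates]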

Results showed that for the spiking DBN, increasing the input firing rate reduces the mean classification latency, as observed in Figure 6.10(a). More specifically, when the input firing rate is 1,500 Hz, the mean classification latency becomes

2. A video of a spiking DBN running on SpiNNaker and recognising a handwritten digit can be seen here:

https://youtu.be/f-Xi2Y4TB58
