
Deep learning


(1)

Deep Learning

(2)
(3)

Perceptron

Rosenblatt, 1958
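
For reference, the perceptron's decision rule, with weights w, inputs x, and bias b:

$$
\text{output} =
\begin{cases}
1 & \text{if } \mathbf{w}\cdot\mathbf{x} + b > 0 \\
0 & \text{otherwise}
\end{cases}
$$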

(4)

Multi-layer perceptron

Michael Nielsen, 2016

(5)

Sigmoid function
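
The sigmoid (logistic) function maps any real input into the interval (0, 1), giving a smooth alternative to the perceptron's step:

$$
\sigma(z) = \frac{1}{1 + e^{-z}}
$$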

(6)

3-layer Artificial Neural Network

(7)
(8)

Digit recognition NN

28x28 = 784 input pixels

Pixel intensity: 0.0 = white, 1.0 = black
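
A minimal sketch of such a digit-recognition network in NumPy, assuming the 784-30-10 layer sizes used in Nielsen's example; the weights here are random placeholders rather than trained values:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
# Illustrative layer sizes: 784 input pixels, 30 hidden neurons, 10 output digits.
W1, b1 = rng.standard_normal((30, 784)), np.zeros(30)
W2, b2 = rng.standard_normal((10, 30)), np.zeros(10)

def predict_digit(image):
    """image: 28x28 array of intensities, 0.0 = white, 1.0 = black."""
    x = image.reshape(784)       # flatten the image into a 784-vector
    h = sigmoid(W1 @ x + b1)     # hidden-layer activations
    y = sigmoid(W2 @ h + b2)     # one output activation per digit 0-9
    return int(np.argmax(y))     # the most activated output is the prediction

print(predict_digit(rng.random((28, 28))))
```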

(9)

Training NN

Backpropagation (1986) is a fast way to compute the gradient of the cost function with respect to the weights and biases.
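
Once backpropagation supplies that gradient of the cost C, each weight and bias is adjusted by a gradient-descent step with learning rate η:

$$
w \leftarrow w - \eta\,\frac{\partial C}{\partial w},
\qquad
b \leftarrow b - \eta\,\frac{\partial C}{\partial b}
$$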

(10)

Convolutional Neural Network

• 3 main types of layers

• Convolutional layer

• Pooling layer

• Fully Connected layer
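
A minimal sketch of how these three layer types are typically stacked, written against the Keras API; the filter counts and layer sizes are illustrative and not taken from the slides:

```python
import tensorflow as tf

# Conv -> Pool -> Conv -> Pool -> Fully Connected, for 32x32 RGB inputs.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 3)),
    tf.keras.layers.Conv2D(32, kernel_size=3, activation="relu"),  # convolutional layer
    tf.keras.layers.MaxPooling2D(pool_size=2),                     # pooling layer
    tf.keras.layers.Conv2D(64, kernel_size=3, activation="relu"),
    tf.keras.layers.MaxPooling2D(pool_size=2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),               # fully connected layer
])
model.summary()
```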

(11)

First layer

Drawing by Michael Zibulevsky

(12)

Feature map

(13)

Pooling operation
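
A minimal NumPy sketch of the most common variant, 2x2 max pooling with stride 2, which keeps only the largest value in each 2x2 window:

```python
import numpy as np

def max_pool_2x2(feature_map):
    """feature_map: (H, W) array with even H and W; returns an (H/2, W/2) array."""
    h, w = feature_map.shape
    return feature_map.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

fmap = np.array([[1, 3, 2, 0],
                 [4, 6, 5, 1],
                 [7, 2, 9, 8],
                 [0, 1, 3, 4]])
print(max_pool_2x2(fmap))  # [[6 5]
                           #  [7 9]]
```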

(14)

Activation function
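
In convolutional networks the activation applied after each convolution is most often the rectified linear unit (ReLU) rather than the sigmoid:

$$
\mathrm{ReLU}(x) = \max(0, x)
$$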

(15)

CIFAR-10 image dataset

(16)

CIFAR-10 dataset

• The CIFAR-10 dataset consists of 60,000 32x32 colour images in 10 classes, with 6,000 images per class. There are 50,000 training images and 10,000 test images.
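
A minimal sketch of loading this split with the Keras dataset helper; the printed shapes correspond to the figures above:

```python
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
print(x_train.shape)  # (50000, 32, 32, 3) -- 50,000 training images of 32x32x3
print(x_test.shape)   # (10000, 32, 32, 3) -- 10,000 test images
print(y_train.min(), y_train.max())  # 0 9 -- labels for the 10 classes
```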

(17)

Example of CNN layer

(18)

Convolutional layer
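
The spatial output size of a convolutional layer follows from the input width W, receptive field F, stride S, and zero padding P; this formula underlies the parameter count on the next slide:

$$
W_{\text{out}} = \frac{W - F + 2P}{S} + 1
$$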

(19)

Parameters

• ImageNet challenge in 2012: input images of 227x227x3
• First convolutional layer: receptive field F = 11, stride S = 4, 96 filters
• Output volume: 55x55x96 = 290,400 neurons
• Each neuron connects to 11x11x3 = 363 weights + 1 bias
• Total: 290,400 x 364 = 105,705,600 parameters (without parameter sharing)
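
Worked out with the output-size formula above (P = 0):

$$
\frac{227 - 11}{4} + 1 = 55,
\qquad
55 \times 55 \times 96 = 290{,}400 \text{ neurons},
$$
$$
290{,}400 \times (11 \times 11 \times 3 + 1) = 290{,}400 \times 364 = 105{,}705{,}600 \text{ parameters}.
$$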

(20)

Parameter sharing

• The output volume 55x55x96 has 96 depth slices of size 55x55 each
• All neurons within one depth slice share the same weights (one filter per slice)
• With sharing, the total is 96x11x11x3 = 34,848 weights + 96 biases
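
A quick check of the two counts in Python, using only the figures from these slides:

```python
filters, field, depth, out_size = 96, 11, 3, 55

neurons = out_size * out_size * filters             # 290,400 output neurons
unshared = neurons * (field * field * depth + 1)    # 105,705,600 parameters without sharing
shared = filters * field * field * depth + filters  # 34,848 weights + 96 biases = 34,944
print(unshared, shared)
```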

(21)

96 filters of 11x11x3 each

Krizhevsky et al. 2012

(22)

Pooling or downsampling

(23)

Case studies

• LeNet: the first successful application of convolutional networks, developed by Yann LeCun in the 1990s and used to read zip codes, hand-written digits, etc.

(24)

Object Recognition

(25)
(26)

AlphaGo vs Lee Sedol, March 2016

(27)

Monte Carlo Tree Search
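
Monte Carlo Tree Search repeatedly picks which child move to explore next; the standard UCT selection rule balances exploitation of known good moves against exploration of rarely visited ones:

$$
\mathrm{UCT}(i) = \frac{w_i}{n_i} + c\,\sqrt{\frac{\ln N}{n_i}}
$$

where w_i is the number of simulated wins through child i, n_i its visit count, N the parent's visit count, and c an exploration constant.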

(28)

AlphaGo

"Mastering the game of Go with deep neural networks and tree search". Nature. 529 (7587): 484–489.

• Deep learning

• Monte Carlo Tree Search

(29)

Future of Go Summit 2017

(30)

Automatic Speech Recognition

(31)

Long Short-Term Memory
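
For reference, the standard LSTM cell update, with input, forget, and output gates and element-wise product ∘:

$$
\begin{aligned}
i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i) \\
f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f) \\
o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o) \\
\tilde{c}_t &= \tanh(W_c x_t + U_c h_{t-1} + b_c) \\
c_t &= f_t \circ c_{t-1} + i_t \circ \tilde{c}_t \\
h_t &= o_t \circ \tanh(c_t)
\end{aligned}
$$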

(32)
(33)

Computation power

• The second-generation TPU was announced in May 2017; each individual TPU ASIC is rated at 45 TFLOPS.

(34)
(35)
(36)

Top500.org

Rank  System   Cores  TFlop/s  Power (kW)
1     Sunway   10M    93K      15K
...
5     Sequoia  1M     15K      7.8K
...
10    Trinity  0.3M   8K       4K
...
500   Bull     0.01M  0.4K     0.2K

(37)

State of the art

• Learn everything

• No user program

• Self improvement
