BACKPROPAGATION NETWORK SPEED ON THE 15-20-1 AND 15-18-1 NEURON MODELS FOR DETERMINING THE MOST OPTIMAL TRAINING ALGORITHM

MAHARANI, TIYAS (2019) Backpropagation Network Speed on the 15-20-1 and 15-18-1 Neuron Models for Determining the Most Optimal Training Algorithm. Bachelor thesis, Universitas Muhammadiyah Purwokerto.

Files:
- TIYAS MAHARANI COVER.pdf (2MB)
- TIYAS MAHARANI BAB I.pdf (945kB)
- TIYAS MAHARANI BAB II.pdf (1MB)
- TIYAS MAHARANI BAB III.pdf (1MB) — Restricted to Repository staff only
- TIYAS MAHARANI BAB IV.pdf (1MB) — Restricted to Repository staff only
- TIYAS MAHARANI BAB V.pdf (935kB) — Restricted to Repository staff only
- TIYAS MAHARANI DAPUS.pdf (933kB)
- TIYAS MAHARANI LAMPIRAN.pdf (7MB) — Restricted to Repository staff only

Abstract

Backpropagation is one of the most widely used learning models in artificial neural networks (ANN), because many applications have been completed successfully with it. Parameters that affect algorithm performance include the number of neurons in the input layer, the number of neurons in the hidden layer, the maximum number of epochs, and the learning rate. Previous research tested network speed using 10 neurons in the input layer and 12 neurons in the hidden layer with alpha (α) = 5%, yielding an average time of 0.007785 ± 0.0005480 seconds at a learning rate of 0.2 with the Gradient Descent with Adaptive Learning Rate (traingda) training algorithm. That research did not examine the speed of 12 training algorithms using 15 neurons in the input layer. Therefore, this study tested 12 training algorithms to determine the most optimal backpropagation network speed, using a mixed method with alpha (α) = 5%. The first model, with 18 neurons in the hidden layer, achieved an average speed of 0.007761000 seconds at a learning rate of 0.9 with the One Step Secant (trainoss) training algorithm. The second model, with 20 neurons in the hidden layer, achieved an average speed of 0.007085000 seconds at a learning rate of 1.0 with the Scaled Conjugate Gradient (trainscg) training algorithm. These two training algorithms proved the most optimal compared to the others.
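The experimental setup described above — a single-hidden-layer network with 15 input neurons, 18 or 20 hidden neurons, and 1 output neuron, timed across learning rates — can be sketched as follows. This is a minimal illustration only: the thesis used MATLAB's built-in training algorithms (traingda, trainoss, trainscg, and others), whereas this sketch uses plain gradient-descent backpropagation in NumPy, with synthetic data, just to show how one would time each (hidden size, learning rate) configuration.

```python
import time
import numpy as np

def train_backprop(X, y, n_hidden, lr, epochs=100, seed=0):
    """Train a one-hidden-layer sigmoid network (n_in - n_hidden - 1)
    with plain gradient-descent backpropagation.

    Illustrative sketch only; the thesis compared 12 MATLAB training
    algorithms, not this hand-rolled update rule.
    """
    rng = np.random.default_rng(seed)
    n_in = X.shape[1]
    W1 = rng.standard_normal((n_in, n_hidden)) * 0.1   # input -> hidden
    W2 = rng.standard_normal((n_hidden, 1)) * 0.1      # hidden -> output
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        h = sigmoid(X @ W1)             # hidden activations
        out = sigmoid(h @ W2)           # network output
        err = out - y                   # output error
        d2 = err * out * (1 - out)      # output-layer delta
        d1 = (d2 @ W2.T) * h * (1 - h)  # hidden-layer delta
        W2 -= lr * (h.T @ d2)           # weight updates
        W1 -= lr * (X.T @ d1)
    return W1, W2

# Synthetic data: 15 input neurons, as in the thesis models.
X = np.random.default_rng(1).random((50, 15))
y = np.random.default_rng(2).random((50, 1))

# Time each (hidden size, learning rate) configuration.
for n_hidden in (18, 20):
    for lr in (0.2, 0.9, 1.0):
        t0 = time.perf_counter()
        train_backprop(X, y, n_hidden, lr)
        elapsed = time.perf_counter() - t0
        print(f"hidden={n_hidden} lr={lr}: {elapsed:.6f} s")
```

In the actual study, each timing would be repeated and averaged (hence the reported mean ± standard deviation), and the training function would be one of MATLAB's 12 built-in algorithms rather than this fixed-rate gradient descent.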

Item Type: Thesis (Bachelor)
Uncontrolled Keywords: Backpropagation, speed, One Step Secant, Scaled Conjugate Gradient, Learning rate.
Subjects: Q Science > QA Mathematics > QA75 Electronic computers. Computer science
Divisions: Fakultas Teknik > Teknik Informatika S1
Depositing User: Indra Himawan
Date Deposited: 07 Jul 2022 07:19
Last Modified: 07 Jul 2022 07:19
URI: http://repository.ump.ac.id/id/eprint/12463
