TINGKAT KECEPATAN ALGORITMA PEMBELAJARAN PADA JARINGAN BACKPROPAGATION PADA MODEL NEURON 15-26-1 DAN MODEL NEURON 15-29-1 (Speed of Learning Algorithms in Backpropagation Networks on the 15-26-1 and 15-29-1 Neuron Models)

SOLEHAH, ALPIANA (2019) TINGKAT KECEPATAN ALGORITMA PEMBELAJARAN PADA JARINGAN BACKPROPAGATION PADA MODEL NEURON 15-26-1 DAN MODEL NEURON 15-29-1. Bachelor thesis, UNIVERSITAS MUHAMMADIYAH PURWOKERTO.

Files:

- ALPIANA SOLEHAH_COVER.pdf (2MB, preview available)
- ALPIANA SOLEHAH_BAB I.pdf (733kB, preview available)
- ALPIANA SOLEHAH_BAB II.pdf (1MB, preview available)
- ALPIANA SOLEHAH_BAB III.pdf (1MB, restricted to Repository staff only)
- ALPIANA SOLEHAH_BAB IV.pdf (1MB, restricted to Repository staff only)
- ALPIANA SOLEHAH_BAB V.pdf (726kB, restricted to Repository staff only)
- ALPIANA SOLEHAH_DAPUS.pdf (729kB, preview available)
- ALPIANA SOLEHAH_LAMPIRAN.pdf (1MB, restricted to Repository staff only)

Abstract

Backpropagation is a learning algorithm used in artificial neural networks (ANNs) and is widely applied to problem solving. It is a supervised algorithm, typically used by multilayer perceptrons to adjust the weights connected to the neurons in the hidden layers. ANNs have two classes of learning algorithm, supervised (guided) and unsupervised (non-guided); backpropagation belongs to the supervised class. Its performance is influenced by several network parameters, including the number of neurons in the input layer, the number of neurons in the hidden layer, the maximum number of epochs, and the magnitude of the learning rate. On this basis, this study tested 12 training algorithms using the ANOVA statistical test with alpha (α) = 5% to measure the speed of each training algorithm at different learning rates. The research uses a mixed method: development research combined with qualitative and quantitative (statistical) testing. The research data were generated by running each of the 12 training algorithms 20 times for every learning rate (lr). For the model with 15 input neurons, 26 hidden-layer neurons, and 1 output neuron, the fastest algorithm was Gradient Descent (traingd) at learning rate lr = 0.9, with a mean time of 0.006800 seconds. For the model with 15 input neurons, 29 hidden-layer neurons, and 1 output neuron, the fastest was Scaled Conjugate Gradient (trainscg) at lr = 0.01, with a mean time of 0.006705 seconds.
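The experimental design above — time repeated training runs of a 15-N-1 backpropagation network per learning rate, then compare the timing groups with a one-way ANOVA — can be sketched as follows. This is a minimal, hypothetical reconstruction: the thesis used MATLAB's training functions (traingd, trainscg, etc.), whereas this sketch substitutes a hand-written gradient-descent network on synthetic data and uses SciPy's `f_oneway` for the ANOVA; the learning rates, epoch count, and sample sizes here are illustrative, not the thesis's exact settings.

```python
import time
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)

def train_once(n_hidden=26, lr=0.9, epochs=100):
    """One training run of a 15-n_hidden-1 backpropagation network on
    synthetic data; returns the wall-clock training time in seconds."""
    X = rng.standard_normal((200, 15))            # 15 input neurons
    y = rng.standard_normal((200, 1))             # 1 output neuron
    W1 = 0.1 * rng.standard_normal((15, n_hidden))
    W2 = 0.1 * rng.standard_normal((n_hidden, 1))
    t0 = time.perf_counter()
    for _ in range(epochs):
        h = np.tanh(X @ W1)                       # hidden-layer activations
        err = h @ W2 - y                          # linear output minus target
        # backpropagate the error and take a plain gradient-descent step
        gW2 = h.T @ err / len(X)
        gW1 = X.T @ ((err @ W2.T) * (1 - h**2)) / len(X)
        W2 -= lr * gW2
        W1 -= lr * gW1
    return time.perf_counter() - t0

# 20 timed runs per learning rate, mirroring the 20-loop design
times = {lr: [train_once(lr=lr) for _ in range(20)] for lr in (0.01, 0.5, 0.9)}
f_stat, p_value = f_oneway(*times.values())       # one-way ANOVA, alpha = 0.05
print({lr: round(float(np.mean(t)), 6) for lr, t in times.items()})
print("reject H0 (speeds differ)" if p_value < 0.05 else "fail to reject H0")
```

The per-group mean times play the role of the thesis's reported averages (e.g. 0.006800 s for traingd at lr = 0.9), and the ANOVA p-value decides, at α = 5%, whether the learning rate has a statistically significant effect on training speed.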

Item Type: Thesis (Bachelor)
Uncontrolled Keywords: Backpropagation, Speed, learning rate, neurons, hidden layers
Subjects: Q Science > QA Mathematics > QA76 Computer software
Divisions: Fakultas Teknik > Teknik Informatika S1
Depositing User: Iin Hayuningtyas
Date Deposited: 21 Mar 2022 02:19
Last Modified: 21 Mar 2022 02:19
URI: https://repository.ump.ac.id:80/id/eprint/11085
