PENENTUAN ALGORITMA PELATIHAN YANG PALING OPTIMAL PADA MODEL NEURON 10-12-1 BERDASARKAN KECEPATAN JARINGAN
(Determination of the Most Optimal Training Algorithm for the 10-12-1 Neuron Model Based on Network Speed)

Rahman, Yusa Aulia (2018) PENENTUAN ALGORITMA PELATIHAN YANG PALING OPTIMAL PADA MODEL NEURON 10-12-1 BERDASARKAN KECEPATAN JARINGAN. Bachelor thesis, UNIVERSITAS MUHAMMADIYAH PURWOKERTO.

Files:
- YUSA AULIA RAHMAN COVER.pdf — 1MB (preview available)
- YUSA AULIA RAHMAN BAB I.pdf — 659kB (preview available)
- YUSA AULIA RAHMAN BAB II.pdf — 863kB (preview available)
- YUSA AULIA RAHMAN BAB III.pdf — 530kB (preview available)
- YUSA AULIA RAHMAN BAB IV.pdf — 799kB (restricted to repository staff only)
- YUSA AULIA RAHMAN BAB V.pdf — 1MB (restricted to repository staff only)
- YUSA AULIA RAHMAN BAB VI.pdf — 583kB (restricted to repository staff only)
- YUSA AULIA RAHMAN DAPUS.pdf — 657kB (preview available)
- YUSA AULIA RAHMAN LAMPIRAN.pdf — 1MB (restricted to repository staff only)

Abstract

Backpropagation is a popular supervised learning model for artificial neural networks. Parameters that affect the performance of the backpropagation learning algorithm include the number of neurons in the input layer, the number of neurons in the hidden layer, the maximum epoch used, and the learning rate. One measure of how optimal a training algorithm is, is its speed: the shorter the training time, the more optimal the algorithm's performance. Tests in a previous study with 10 neurons in the hidden layer found that, judged by the smallest training error, the most optimal training algorithm was Levenberg-Marquardt (trainlm), with an average MSE of 0.001 at a significance level of α = 5%. The present research was conducted to find the most optimal algorithm in terms of network speed, with the network parameters being the numbers of neurons in the input, hidden, and output layers and the learning rate, with a maximum epoch of 10,000. This study uses a mixed method, namely development research with qualitative and quantitative testing (using ANOVA statistical tests). The research data were generated randomly for a network with 10 neurons in the input layer, 12 neurons in the hidden layer, and 1 neuron in the output layer. The analysis indicates that Gradient Descent with Momentum and Adaptive Learning Rate (traingdx) is the most optimal training algorithm in terms of network speed, at a learning rate of 1.0, with an average training time of 0.007785 ± 0.0004955 seconds.
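The experimental setup described above (a 10-12-1 backpropagation network trained with traingdx-style gradient descent, timed to compare network speed) can be sketched as follows. This is a minimal NumPy illustration, not the thesis's MATLAB code; the momentum constant, starting learning rate, adaptation factors, and reduced epoch count are assumptions made for the sketch, and the thesis reports a learning rate of 1.0 as optimal for MATLAB's traingdx.

```python
import time
import numpy as np

# Illustrative sketch only: a simplified re-creation of gradient descent
# with momentum and an adaptive learning rate on a 10-12-1 network,
# timed the way the study compares "network speed".

rng = np.random.default_rng(0)

# Random training data, as in the study: 10 inputs, 1 target per sample.
X = rng.random((100, 10))
y = rng.random((100, 1))

# 10-12-1 architecture: 10 input, 12 hidden, 1 output neuron.
W1 = rng.standard_normal((10, 12)) * 0.1
b1 = np.zeros(12)
W2 = rng.standard_normal((12, 1)) * 0.1
b2 = np.zeros(1)

lr = 0.1          # assumed starting rate for this sketch
momentum = 0.9    # assumed; the abstract does not state the momentum constant
max_epoch = 1000  # the thesis used 10,000; reduced here for a quick demo
vel = [np.zeros_like(p) for p in (W1, b1, W2, b2)]
prev_mse = np.inf

start = time.perf_counter()
for epoch in range(max_epoch):
    # Forward pass: sigmoid hidden layer, linear output.
    h = 1.0 / (1.0 + np.exp(-np.clip(X @ W1 + b1, -60, 60)))
    out = h @ W2 + b2
    err = out - y
    mse = float(np.mean(err ** 2))

    # Adaptive learning rate: shrink after an MSE increase, grow otherwise.
    lr = lr * 0.7 if mse > prev_mse * 1.04 else min(lr * 1.05, 0.5)
    prev_mse = mse

    # Backward pass (plain backpropagation).
    d_out = 2.0 * err / len(X)
    gW2, gb2 = h.T @ d_out, d_out.sum(0)
    d_h = (d_out @ W2.T) * h * (1.0 - h)
    gW1, gb1 = X.T @ d_h, d_h.sum(0)

    # Momentum update of every parameter.
    for i, (p, g) in enumerate(zip((W1, b1, W2, b2), (gW1, gb1, gW2, gb2))):
        vel[i] = momentum * vel[i] - lr * g
        p += vel[i]

elapsed = time.perf_counter() - start
print(f"final MSE: {mse:.6f}, training time: {elapsed:.4f} s")
```

The elapsed wall-clock time printed at the end is the "network speed" metric the study averages across repeated runs before comparing algorithms with ANOVA.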

Item Type: Thesis (Bachelor)
Uncontrolled Keywords: Backpropagation, hidden layer, Gradient Descent with Momentum and Adaptive Learning Rate, ANOVA
Subjects: Q Science > QA Mathematics > QA75 Electronic computers. Computer science
Divisions: Fakultas Teknik > Teknik Informatika S1
Depositing User: Danarto Kh
Date Deposited: 29 Sep 2021 05:28
Last Modified: 29 Sep 2021 05:28
URI: https://repository.ump.ac.id:80/id/eprint/10468
