TINGKAT ERROR ALGORITMA PELATIHAN JARINGAN BACKPROPAGATION BERDASARKAN LAJU PEMAHAMAN (LEARNING RATE) PADA MODEL NEURON 10-12-1

HAKIM, LUKMAN NUL (2018) TINGKAT ERROR ALGORITMA PELATIHAN JARINGAN BACKPROPAGATION BERDASARKAN LAJU PEMAHAMAN (LEARNING RATE) PADA MODEL NEURON 10-12-1. Bachelor thesis, Universitas Muhammadiyah Purwokerto.

Files:
- LUKMAN NUL HAKIM_COVER.pdf (1MB, preview available)
- LUKMAN NUL HAKIM_BAB I.pdf (715kB, preview available)
- LUKMAN NUL HAKIM_BAB II.pdf (941kB, preview available)
- LUKMAN NUL HAKIM_BAB III.pdf (645kB, preview available)
- LUKMAN NUL HAKIM_BAB IV.pdf (919kB, restricted to repository staff only)
- LUKMAN NUL HAKIM_BAB V.pdf (1MB, restricted to repository staff only)
- LUKMAN NUL HAKIM_BAB VI.pdf (706kB, restricted to repository staff only)
- LUKMAN NUL HAKIM_DAFTAR PUSTAKA.pdf (710kB, preview available)
- LUKMAN NUL HAKIM_LAMPIRAN.pdf (3MB, restricted to repository staff only)

Abstract

Artificial neural networks are biologically inspired computational models consisting of processing elements (neurons) and the connections between them. Backpropagation is a supervised learning algorithm, typically used in multilayer perceptrons, that adjusts the weights connecting neurons in the hidden layers. A training algorithm's performance is judged optimal by the error the network produces: the smaller the error, the more optimal the algorithm. In a previous study using 5 input neurons, 10 hidden neurons, and 1 output neuron at a significance level of α = 5%, the most optimal training algorithm (smallest error) was Levenberg-Marquardt. In this research, 12 training algorithms were tested to determine which yields the smallest error. The data were random, with 10 input neurons, 12 neurons in a single hidden layer, and 1 output neuron, at learning rates of 0.01, 0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, and 1. The study concludes that, with network parameters target error = 0.001, maximum epoch = 10000, and learning rate (lr) = 0.2, the backpropagation training algorithm with the smallest error (most optimal) is Levenberg-Marquardt, with an average error of 0.000114842766.
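The 10-12-1 setup described in the abstract can be sketched as follows. This is a minimal illustrative stand-in, assuming plain gradient-descent backpropagation with sigmoid activations on random data; the thesis itself compares 12 training algorithms (such as Levenberg-Marquardt), which are not reproduced here. The random data, seed, and weight initialisation are all assumptions, not the thesis' actual dataset.

```python
import numpy as np

# 10-12-1 architecture: 10 input neurons, 12 hidden neurons, 1 output neuron.
# Network-parameter controls taken from the abstract:
lr = 0.2            # learning rate
target_error = 1e-3 # target (goal) error
max_epoch = 10000   # maximum number of epochs

rng = np.random.default_rng(0)
X = rng.random((100, 10))  # random input data (100 samples, 10 features)
y = rng.random((100, 1))   # random targets (1 output per sample)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Small random initial weights, zero biases (an assumption for illustration)
W1 = rng.standard_normal((10, 12)) * 0.5
b1 = np.zeros(12)
W2 = rng.standard_normal((12, 1)) * 0.5
b2 = np.zeros(1)

for epoch in range(max_epoch):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    err = out - y
    mse = float(np.mean(err ** 2))
    if mse <= target_error:
        break  # stop once the target error is reached
    # Backward pass: gradients of the MSE through both sigmoid layers
    d_out = 2 * err * out * (1 - out) / len(X)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Weight updates scaled by the learning rate
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

print(f"MSE after {epoch + 1} epochs: {mse:.6f}")
```

Varying `lr` over the abstract's grid (0.01 through 1) and recording the final MSE for each algorithm is, in outline, the comparison the thesis performs.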

Item Type: Thesis (Bachelor)
Uncontrolled Keywords: Artificial Neural Networks, Backpropagation, training algorithms, learning rate, Levenberg-Marquardt
Subjects: Q Science > QA Mathematics > QA75 Electronic computers. Computer science
Divisions: Fakultas Teknik > Teknik Informatika S1
Depositing User: Amri Hariri, SIP.
Date Deposited: 29 Sep 2021 06:17
Last Modified: 29 Sep 2021 06:18
URI: http://repository.ump.ac.id/id/eprint/10475
