PENENTUAN ALGORITMA PELATIHAN YANG PALING OPTIMAL PADA MODEL NEURON 10-18-1 BERDASARKAN TINGKAT ERROR

SETIAWAN, DIMAS RIFKIE (2019) PENENTUAN ALGORITMA PELATIHAN YANG PALING OPTIMAL PADA MODEL NEURON 10-18-1 BERDASARKAN TINGKAT ERROR. Bachelor thesis, UNIVERSITAS MUHAMMADIYAH PURWOKERTO.

Files:
- COVER_DIMAS RIFKIE SETIAWAN_TI'19.pdf (1MB)
- BAB I_DIMAS RIFKIE SETIAWAN_TI'19.pdf (645kB)
- BAB II_DIMAS RIFKIE SETIAWAN_TI'19.pdf (800kB)
- BAB III_DIMAS RIFKIE SETIAWAN_TI'19.pdf (576kB, restricted to repository staff only)
- BAB IV_DIMAS RIFKIE SETIAWAN_TI'19.pdf (822kB, restricted to repository staff only)
- BAB V_DIMAS RIFKIE SETIAWAN_TI'19.pdf (985kB, restricted to repository staff only)
- BAB VI_DIMAS RIFKIE SETIAWAN_TI'19.pdf (644kB, restricted to repository staff only)
- DAFTAR PUSTAKA_DIMAS RIFKIE SETIAWAN_TI'19.pdf (645kB)
- LAMPIRAN_DIMAS RIFKIE SETIAWAN_TI'19.pdf (941kB, restricted to repository staff only)

Abstract

Artificial neural networks are biologically inspired computational models consisting of processing elements (neurons) and the connections between them. One training method for artificial neural networks is backpropagation, a supervised learning algorithm typically used by multilayer perceptrons to adjust the weights connected to the neurons in the hidden layers. The performance of a training algorithm is judged by the error produced by the network: the smaller the error, the more optimal the algorithm. In a previous study using 5 input neurons and a significance level of α = 5%, the most optimal training algorithm based on the smallest error was the Levenberg-Marquardt (trainlm) algorithm. In this research, 12 training algorithms were tested to determine which is most optimal in terms of the smallest error rate. The data source is random data fed to a network with 10 input neurons, 18 neurons in one hidden layer, and 1 output neuron, trained with learning rates of 0.01, 0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, and 1. The study concludes that, with a target error of 0.001, a maximum of 10000 epochs, and a learning rate (lr) of 0.8, the training algorithm with the smallest error (most optimal) is the Levenberg-Marquardt algorithm, with an average error of 0.000129748405 ± 0.0002289567366.
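The abstract describes an experiment that trains the same 10-18-1 backpropagation network on random data at several learning rates and compares the resulting errors. The sketch below is only an illustration of that kind of setup in Python with scikit-learn, not the thesis's own implementation; the trainlm routine named in the abstract is a MATLAB Neural Network Toolbox training function, and scikit-learn does not provide Levenberg-Marquardt, so the random data, the "sgd" solver, and the subset of learning rates shown here are assumptions made for illustration.

```python
# Minimal sketch (assumed setup, not the author's code): a 10-18-1 network
# trained by gradient-descent backpropagation on random data, with the mean
# squared error compared across several learning rates.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.random((200, 10))   # random data, 10 input neurons
y = rng.random(200)         # target for the single output neuron

for lr in (0.01, 0.05, 0.1, 0.2, 0.4, 0.8):
    model = MLPRegressor(
        hidden_layer_sizes=(18,),   # one hidden layer with 18 neurons
        solver="sgd",               # plain gradient-descent backpropagation
        learning_rate_init=lr,
        max_iter=10000,             # cap on training epochs
        tol=1e-3,                   # stop when improvement falls below this
        random_state=0,
    )
    model.fit(X, y)
    mse = float(np.mean((model.predict(X) - y) ** 2))
    print(f"learning rate {lr}: mean squared error {mse:.6f}")
```

In a MATLAB-based workflow, the analogous comparison would keep the network fixed and swap the training function (for example trainlm or traingd) while recording the error reached by each.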

Item Type: Thesis (Bachelor)
Additional Information: Advisor: Hindayati Mustafidah, S.Si., M.Kom.
Uncontrolled Keywords: Artificial Neural Networks, Backpropagation, training algorithms, learning rate, Levenberg-Marquardt
Subjects: Q Science > QA Mathematics > QA76 Computer software
Divisions: Fakultas Teknik > Teknik Informatika S1
Depositing User: Iin Hayuningtyas
Date Deposited: 06 Feb 2019 01:19
Last Modified: 06 Feb 2019 01:19
URI: https://repository.ump.ac.id:80/id/eprint/8539
