ERROR RATE IN THE 10-14-1 NEURON MODEL FOR DETERMINING THE MOST OPTIMAL TRAINING ALGORITHM

SETIANINGSIH, DINI (2018) TINGKAT ERROR PADA MODEL NEURON 10-14-1 UNTUK MENENTUKAN ALGORITMA PELATIHAN YANG PALING OPTIMAL. Bachelor thesis, Universitas Muhammadiyah Purwokerto.

DINI SETIANINGSIH_COVER.pdf (2MB)
DINI SETIANINGSIH_BAB I.pdf (673kB)
DINI SETIANINGSIH_BAB II.pdf (1MB)
DINI SETIANINGSIH_BAB III.pdf (603kB)
DINI SETIANINGSIH_BAB IV.pdf (928kB) - Restricted to Repository staff only
DINI SETIANINGSIH_BAB V.pdf (1MB) - Restricted to Repository staff only
DINI SETIANINGSIH_BAB VI.pdf (762kB) - Restricted to Repository staff only
DINI SETIANINGSIH_DAFTAR PUSTAKA.pdf (766kB)
DINI SETIANINGSIH_LAMPIRAN.pdf (2MB) - Restricted to Repository staff only

Abstract

Artificial neural networks, especially the backpropagation method, are widely used to solve various problems. In an artificial neural network, the key factor that determines performance is the training algorithm used. How optimally a training algorithm provides a solution can be judged from the error produced by the network: the smaller the error, the more optimal the algorithm. In a previous study, the most optimal training algorithm, based on the smallest error using 5 input neurons, 10 neurons in one hidden layer, and 1 output neuron at a significance level of α = 5%, was the Levenberg-Marquardt algorithm, with an average error rate of 0.0002196. In this research, 12 training algorithms were tested, and the most optimal one was taken to be the algorithm with the smallest error rate. The study uses a mixed method: development research combined with qualitative and quantitative analysis (statistical tests). The research data were random data fed to a network with 10 input neurons, 14 neurons in one hidden layer, and 1 output neuron, using learning rates of 0.01, 0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, and 1. The study concludes that the backpropagation training algorithm with the smallest error (i.e. the most optimal), under the network parameter controls target error = 0.001, maximum epoch = 10000, and learning rate (lr) = 0.2, is the Levenberg-Marquardt algorithm, with an average error rate of 0.00010132106600.
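The comparison procedure the abstract describes can be illustrated with a short sketch: train a 10-14-1 network on random data under the stated parameter controls (maximum epoch 10000, learning rate 0.2, target error 0.001) with several training algorithms and compare their final errors. The Python code below is not the thesis code; the data shapes, the use of scikit-learn's MLPRegressor, the mean squared error metric, and the solver list are all assumptions made for illustration, and scikit-learn does not offer the 12 algorithms (including Levenberg-Marquardt) that the thesis evaluates.

    # Minimal sketch, not the thesis code: compare a few training solvers
    # on a 10-14-1 network built from random data, mirroring the parameter
    # controls reported in the abstract.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.metrics import mean_squared_error

    rng = np.random.default_rng(0)
    X = rng.random((200, 10))   # 10 input neurons, random data (200 samples assumed)
    y = rng.random(200)         # 1 output neuron

    for solver in ("lbfgs", "sgd", "adam"):
        net = MLPRegressor(
            hidden_layer_sizes=(14,),   # 14 neurons in one hidden layer
            activation="logistic",
            solver=solver,
            learning_rate_init=0.2,     # lr = 0.2 (used by sgd/adam only)
            max_iter=10000,             # maximum epoch = 10000
            tol=0.001,                  # stop once improvement nears the target error
            random_state=0,
        )
        net.fit(X, y)
        mse = mean_squared_error(y, net.predict(X))
        print(f"{solver:>5}  average error (MSE) = {mse:.10f}")

Note that Levenberg-Marquardt itself is not among scikit-learn's solvers; in MATLAB it is available as the trainlm training function, which is the kind of environment where a 12-algorithm comparison like the one in this thesis is usually carried out.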

Item Type: Thesis (Bachelor)
Uncontrolled Keywords: Backpropagation, Artificial Neural Network, training algorithm, error, Levenberg-Marquardt
Subjects: Q Science > QA Mathematics > QA75 Electronic computers. Computer science
Divisions: Fakultas Teknik > Teknik Informatika S1
Depositing User: Amri Hariri, SIP.
Date Deposited: 20 Sep 2021 06:13
Last Modified: 20 Sep 2021 06:13
URI: https://repository.ump.ac.id:80/id/eprint/10415
