COMPARISON OF THE ERROR RATES OF BACKPROPAGATION NETWORK TRAINING ALGORITHMS USING THE 15-22-1 AND 15-25-1 NEURON MODELS

ANGGASTA, TIARA GIOFANI (2019) PERBANDINGAN TINGKAT ERROR ALGORITMA PELATIHAN JARINGAN BACKPROPAGATION MENGGUNAKAN MODEL NEURON 15-22-1 DAN 15-25-1. Bachelor thesis, Universitas Muhammadiyah Purwokerto.

Files:
Tiara Giofani Anggasta Cover.pdf (2MB)
Tiara Giofani Anggasta BAB I.pdf (1MB)
Tiara Giofani Anggasta BAB II.pdf (1MB)
Tiara Giofani Anggasta BAB III.pdf (1MB, Restricted to Repository staff only)
Tiara Giofani Anggasta BAB IV.pdf (1MB, Restricted to Repository staff only)
Tiara Giofani Anggasta BAB V.pdf (971kB, Restricted to Repository staff only)
Tiara Giofani Anggasta Dapus.pdf (1MB)
Tiara Giofani Anggasta Lampiran.pdf (7MB, Restricted to Repository staff only)

Abstract

Backpropagation is a supervised algorithm used to train a network to obtain the best weights. It is an ideal solution for problems that cannot easily be formulated as explicit algorithms. Its performance is determined largely by the training algorithm used, and training algorithms in backpropagation networks are influenced by several network parameters. The performance of a training algorithm is judged optimal based on the error produced by the network with a maximum epoch of 10000 and a target error of 0.001. A previous study using 5 input neurons, 7 hidden-layer neurons, and 1 output neuron obtained the smallest error, namely 0.000126175395 ± 0.0001591834121, at a learning rate (lr) of 0.6 with the Levenberg-Marquardt algorithm. This study uses 12 training algorithms to identify the most optimal one. The method used is a mixed method, namely development research with quantitative and qualitative testing. The data source is random data with 15 input neurons and learning rates of 0.01, 0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, and 1.0. The results show that for the 15-22-1 model at a learning rate (lr) of 1.0, the Levenberg-Marquardt algorithm is the most optimal training algorithm, with an error value of 0.00009767945050. For the 15-25-1 model at learning rates (lr) of 0.6 and 0.8, the Levenberg-Marquardt algorithm is likewise the most optimal training algorithm, with an error value of 0.00013827017600.
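
To make the setup concrete, below is a minimal sketch in Python (NumPy) of training a 15-22-1 feedforward network by backpropagation on random data, using the stopping criteria quoted in the abstract (maximum epoch = 10000, target error = 0.001). The thesis itself compares twelve training algorithms, including Levenberg-Marquardt; this sketch uses plain gradient descent only, and the sigmoid activations, sample count, and lr value shown are illustrative assumptions, not details taken from the thesis.

    # Minimal sketch (not the thesis's own implementation): a 15-22-1 feedforward
    # network trained with plain gradient-descent backpropagation on random data.
    # Architecture sizes, max epoch = 10000, and target MSE = 0.001 follow the
    # abstract; activations, data size, and lr are assumed for illustration.
    import numpy as np

    rng = np.random.default_rng(0)

    n_in, n_hidden, n_out = 15, 22, 1   # the 15-22-1 model; set n_hidden = 25 for 15-25-1
    X = rng.random((200, n_in))         # random input patterns, as in the study
    y = rng.random((200, n_out))        # random targets (placeholder data)

    W1 = rng.normal(0, 0.5, (n_in, n_hidden))
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(0, 0.5, (n_hidden, n_out))
    b2 = np.zeros(n_out)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr, max_epoch, target_error = 0.6, 10000, 0.001   # lr is one of the swept values

    for epoch in range(1, max_epoch + 1):
        # forward pass
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        err = out - y
        mse = np.mean(err ** 2)
        if mse <= target_error:
            break
        # backward pass: propagate the error through the sigmoid derivatives
        d_out = err * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        # gradient-descent weight updates
        W2 -= lr * (h.T @ d_out) / len(X)
        b2 -= lr * d_out.mean(axis=0)
        W1 -= lr * (X.T @ d_h) / len(X)
        b1 -= lr * d_h.mean(axis=0)

    print(f"stopped at epoch {epoch} with MSE {mse:.12f}")

Changing n_hidden from 22 to 25 gives the 15-25-1 model, and repeating the loop over the listed learning rates and over different optimizers is the kind of error comparison the abstract reports.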

Item Type: Thesis (Bachelor)
Uncontrolled Keywords: Backpropagation, training algorithm, error, Levenberg-Marquardt.
Subjects: Z Bibliography. Library Science. Information Resources > ZA Information resources
Divisions: Fakultas Teknik > Teknik Informatika S1
Depositing User: Indra Himawan
Date Deposited: 18 Jul 2022 02:34
Last Modified: 18 Jul 2022 02:34
URI: http://repository.ump.ac.id/id/eprint/12602
